Reflective Account
Background
The SHU Development Process [1] (shu-dev) is a software development process and website that can
be followed by students at all levels as they work on both individual and group assessments. It is
based on the generic process framework described by Pressman and Maxim [2], and comprises five
methodological stages: initiation, planning, modelling, construction and deployment. These main
stages encompass a set of support steps: analysis and design related to modelling; code and test
related to construction; and delivery, support and feedback for deployment.
Figure: The current SHU Development Process, shown using an exploded view of the modelling stage [1].
The project's pedagogical aims are that "a bespoke development process can provide a
framework for students to follow, helping them in choosing appropriate approaches and tools for
their projects and better engage with group members remotely. This process will also benefit tutors
by providing a generic structure that can be tailored to each level of study and module. We have
designed a software development process that can be 'instantiated' into each level of study. The
process has been completely detailed by SHU students and provides a series of guidelines and
suggestions of best practices". It is run by the Applied Software Engineering Research Group (ASERG)
within the university's Department of Computing.
Before I joined the project in early 2022, the process was documented through a series of Markdown-
formatted text files [3] and made available through a hosted git code repository [4]. Navigating these
documents was clumsy and confusing, as it relied on the Codeberg repository interface to move
between pages. As the project is student-led, I quickly established that the content should instead be
presented through a dedicated website so that it can be styled in a format that is easier to read, kept
as an online resource for high availability, and made available to many devices through responsive
styling. This was especially important due to the COVID-19 restrictions in place at the time and
continues to be a benefit for teams that work remotely now.
MkDocs [5] was initially selected as it met our needs exactly: to turn Markdown files into HTML
webpages. Styling the built-in 'ReadTheDocs' theme to match SHU branding was simple; however,
customizing the site further, to support inline diagrams, was cumbersome due to the Jinja templating
engine [6] that was used to build HTML pages. MkDocs proved useful as we could produce a 'good-
enough' result quickly, but I knew as I was developing it that a technology change would be
necessary.
Aer collang and analysing feedback on the MkDocs site for the 2021-22 academic year, it was clear
that the site was helpful, however students felt they had to navigate through too many links to
access the content they wanted. This was in part due to the level-based and process-based division
of content, which lead to content duplicaon in some places.
The soluon for these issues is the Astro framework [7]. It has nave support for Markdown, so
minimal content changes were needed to migrate and the largest advantage is the ability to mix
components from other frameworks like React, Vue, Svelte, Tailwind, and more into stac HTML. This
is ideal for a site like shu-dev because content updates are infrequent. Custom funconality is also
easier to implement through Rehype [8] and Remark [9]. I wrote plugins and components that will
return plaintext (for search result indexing), add client-side search funconality, and lter content
based on a visitors chosen level of study. I’m concerned that these customizaons may be dicult to
maintain within the project once I leave at the end of the year, however their modular nature means
that it is possible to remove this funconality enrely, or for it to be updated independently of the
rest of the site as an npm package in the future. The current version of the site went live in January
2023, with the old version maintained for posterity on a legacy address [10].
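To illustrate the plugin approach, the sketch below shows how a remark plugin could copy each page's plaintext into its frontmatter for the client-side search index to consume. It is a minimal sketch under my own assumptions: the plugin name, the `plaintext` frontmatter field, and the use of mdast-util-to-string are illustrative, not necessarily what the repository actually contains.

```ts
// Minimal remark plugin sketch: flatten each Markdown page to plain text
// and expose it via frontmatter so a search indexer can pick it up.
import { toString } from 'mdast-util-to-string';
import type { Root } from 'mdast';

// Minimal shape of the vfile data Astro passes to remark plugins.
interface AstroVFile {
  data: { astro?: { frontmatter?: Record<string, unknown> } };
}

export function remarkPlaintext() {
  return (tree: Root, file: AstroVFile) => {
    // mdast-util-to-string flattens the Markdown tree into plain text,
    // which is what a search index needs rather than rendered HTML.
    const plaintext = toString(tree);
    if (file.data.astro?.frontmatter) {
      file.data.astro.frontmatter.plaintext = plaintext;
    }
  };
}
```

A plugin like this would be registered in astro.config.mjs under markdown.remarkPlugins, alongside any level-filtering plugin, which is what makes it removable or publishable independently of the rest of the site.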
During the academic year 2022-23, the process has been used by some module leaders as inspiration
for the delivery of their respective modules. This includes being detailed in the marking scheme for
the level 5 Professional Software Projects module. The scope of the project is being expanded across
all Software Engineering modules for the academic year 2023-24, so I'm pleased that the project is
having a positive and lasting impact on my course.
Requirements Engineering
The aims of the project mean that both the gathering of requirements and the measurement of
outcomes are best done through qualitative means; most actionable requirements and feedback have
been collected via open comments from staff and students.
From Staff
The process for validating content thus far has been to map each topic or section to the staff
responsible for teaching it at each level of study, then ask for their approval of, and amendments to,
that content as well as the rest of the site. Staff have not informed the layout or design of the website, and
since the initial deployment there have been no suggested content changes. Content has been added and
amended by the shu-dev team based on previously agreed lists, or my own use of the site.
From Students
Students have told us what they expect from the site via the annual survey and in-person comments
recorded by the team. Thematic analysis has been used on the survey to group feedback and turn it
into prioritized, actionable tasks stored as issues on the Codeberg repository. We've previously
received comments about physical and cognitive accessibility, example consistency, and broken links.
Preferred Practices
The current practices work well and allow the team to work around other obligations, such as
teaching and assignments. Additionally, the overall project requirements have remained consistent
across academic years. Coupled with module content being approved before teaching commences,
there is little need to increase the frequency of major updates beyond once per year. With that said,
the site is designed to be used all year, every year, so I feel it would benefit from integrating passive
feedback collection. This could use a ratings pattern, modals, or inline hints [11]. The benefit of this is
that feedback can be submitted as-and-when it is noticed by users, and removing the need for a
separate account and interface, as is required with Codeberg issues, would make giving feedback
easier. My hope is that feedback collected this way would not be acted upon in real time but
folded into the existing site update workflows. Consideration would need to be given to the structure
of the form so that it's as useful as possible. For this reason, I think Socratic Questioning [12] should
be used. This would make continually eliciting requirements and ensuring content is up to date much
easier. A suggested list of questions could be:
• Feedback Location:
    o This page
    o Another page
    o Whole site
• Feedback Category:
    o Content
        - Jargon
        - Missing
        - Typo
        - Other
    o Website
        - Bug
        - Accessibility
        - Feature request
        - Other
• What is your feedback?
• What would be an alternative? (viewpoints and perspectives)
• Why have you given this feedback? (clarification)
It's worth noting that such a form would require honeypot fields or CAPTCHAs [13] to avoid malicious
submissions.
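To make the proposal concrete, below is a minimal TypeScript sketch of how a client-side submit handler could combine the Socratic question fields with a honeypot check. All field names, including the hidden "website" honeypot input, are hypothetical and would need to match whatever markup the widget actually uses.

```ts
// Sketch of a submit handler for the proposed feedback widget.
// Field names ("website" honeypot, "location", "category", etc.) are illustrative only.
interface FeedbackSubmission {
  location: 'this-page' | 'another-page' | 'whole-site';
  category: string;     // e.g. "content/jargon" or "website/bug"
  feedback: string;     // "What is your feedback?"
  alternative?: string; // "What would be an alternative?" (viewpoints and perspectives)
  reason?: string;      // "Why have you given this feedback?" (clarification)
}

export function handleSubmit(form: HTMLFormElement): FeedbackSubmission | null {
  const data = new FormData(form);

  // Honeypot: a visually hidden input that humans leave empty but naive bots fill in.
  const honeypot = data.get('website');
  if (typeof honeypot === 'string' && honeypot.length > 0) {
    return null; // silently drop likely-automated submissions
  }

  return {
    location: data.get('location') as FeedbackSubmission['location'],
    category: String(data.get('category') ?? ''),
    feedback: String(data.get('feedback') ?? ''),
    alternative: String(data.get('alternative') ?? '') || undefined,
    reason: String(data.get('reason') ?? '') || undefined,
  };
}
```

The structured fields map directly onto the question list above, so submissions arrive pre-categorised and can be folded into the existing update workflow without extra triage.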
Design and Development Techniques
The design of the site has been informed by similar sites like ReadTheDocs [14], microservices.io
[15], and OpenAPI [16]. These use lists and accordion menus [11] that the development team decided
students would be familiar with from across the web, particularly from the code documentation sites
they'll also see elsewhere during their studies.
While implementing HTML changes, I made sure to validate them against the WCAG 2.1 [17] accessibility
standard so that the site can be used by as many differently abled students as possible. Axe
DevTools [18] was used extensively during the Astro rebuild to do this.
As the content of the process was created before I joined the team, I quickly decided that I'd
"dogfood" [19] the process when it came to git workflows and task management. This means
I use feature branches when I'm updating content and Kanban as a way of prioritizing tasks.
I'm comfortable with the current design and development strategies. No changes should be made to
these practices.
Test and Deployment Strategy
Current Testing Strategy
Physical accessibility is tested using Axe DevTools to check for WCAG 2.1 compliance, with issues
being fixed during development or added to the Codeberg issues for the team to investigate. Broken
links are checked using deadlinkchecker [20] and drlinkcheck [21]. As I've been the only person
developing the site, I didn’t move past manual unit and integration testing during the Astro
rebuild, as migrating content took priority. User Acceptance in the form of feedback surveys
helps the team test the site for system usability and user experience goals.
Current Deployment Strategy
The site is developed locally on feature branches [22], pushed to a 'development' branch where it is
then tested online, then pushed to 'main' before it is deployed live to students. Scripts are used to
ensure the repository and branch structure matches that expected by the Codeberg host. This approach
ensures changes can be tracked and reverted if necessary, so the site is maintainable.
Preferred Practices
My ad-hoc approach to testing has been the wrong one to take, as it's too easy to think "well, it works
on my machine". Fortunately, bringing on another student, Will, to help with development means
they can focus on implementing automated unit and integration testing via Playwright [23]. This has
already been used to find an issue with the content filter controls on some pages. Further
development will see tests implemented to automatically find accessibility issues and broken links.
I'm satisfied with the choice of Playwright for this purpose as it is based on the same Node.js
technology as Astro, so JavaScript can be used throughout the project. It also has a GUI mode so the
team can visually trace the results of tests to find out exactly why they may fail. Finally, tests can be
triggered from the existing build script for a full integration and deployment process, which will fail if
there are issues.
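To illustrate the direction this work could take, the sketch below shows a Playwright test that uses the @axe-core/playwright integration for automated accessibility scanning, plus a check on the level-filter controls. The URL comes from the live site [1]; the data-testid values are hypothetical placeholders rather than selectors that exist on the real pages.

```ts
// Sketch of automated accessibility and filter checks with Playwright.
import { test, expect } from '@playwright/test';
import AxeBuilder from '@axe-core/playwright';

test('introduction page has no detectable WCAG violations', async ({ page }) => {
  await page.goto('https://aserg.codeberg.page/shu-dev-process/en/introduction');
  // axe-core scans the rendered DOM for accessibility rule violations.
  const results = await new AxeBuilder({ page }).analyze();
  expect(results.violations).toEqual([]);
});

test('level filter hides content aimed at other levels', async ({ page }) => {
  await page.goto('https://aserg.codeberg.page/shu-dev-process/en/introduction');
  await page.getByTestId('level-filter').selectOption('level-5'); // hypothetical control
  await expect(page.getByTestId('level-4-only')).toBeHidden();    // hypothetical section
});
```

Tests like these can be run headlessly from the build script or inspected through Playwright's GUI mode when they fail, which matches the workflow described above.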
Evaluation of Cloud Adoption
Content Hosting
The current site is hosted on Codeberg Pages [24], which is a free service operated by our chosen
remote repository provider. The serverless architecture fits our needs well most of the time, as the
website build output is static; however, there have been intermittent periods where the site returns
HTTP 500 errors, indicating a lack of resilience with the host. To mitigate against this, a content
delivery network should be used to cache pages and improve availability. Assuming this could not be
done via existing SHU IT services, a small budget would be required for a domain and CDN services.
Given the site's low traffic, enterprise-grade services like AWS CloudFront and Amplify
[aws.amazon.com] are unnecessary due to the amount of work required to set them up. A provider
like Cloudflare could be a viable alternative host through their Pages product [25]. I see this issue as a
top priority, given that the site is being adopted across the entire Software Engineering group.
DevOps & CI/CD
Our current integration and deployment process has been streamlined using shell scripts. They're
effective for deploying code with minimal effort; however, they are not continuous because they
must be manually triggered by a member of the shu-dev team. This is a shortcoming of the Codeberg
repository host, as it does not include automated CI/CD features. These are currently marked as
experimental in the upstream Gitea dependency [26].
My preferred practice would be to move to a repository host that supports automated actions, such
as GitHub. This isn't an option, however, as the ASERG team is hosted on Codeberg, so it's prudent to stay with
the same provider for the sake of easier access management.
Content Management
Currently, content is created and maintained in Markdown files stored on the project repository. As
the development team is small and input from academics has not been as direct as editing files, this
is sufficient. Also, given we work in the computing department, there is a low barrier to entry in
asking staff to raise a git issue or a pull request on the repository. With that said, the git issues have
not been used by outside parties, and I can foresee a need to broaden and formalize the content
update process for the site to be maintainable long-term. For this, there are two new routes worth
assessing alongside the use of regular git features:
A content management system (CMS) would allow staff to add, update, and remove content through
a user interface and have those changes reflected on the site. As useful as this is, it would introduce
operational complexity through the need for a backend to host content, and a review process to
ensure it is high quality and relevant. This means cloud-backed options like Storyblok [27],
Contentful [28], Sanity [29], or AWS cannot be used. Instead, git-based tools like Tina CMS [30] or
Frontmatter CMS [31] should be investigated, in order to reduce the editing complexity.
As outlined in ‘Requirements Engineering’ above, an inline feedback widget is an alternative to this,
as content changes can be submitted by anyone via Plausible Events [32] and reviewed by the shu-
dev team before being implemented.
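If the widget route is taken, a hedged sketch of the submission call is below. The "Feedback" event name and its props are my own illustrative choices; only the documented window.plausible custom-event API [32] is relied upon, and Plausible props are simple key/value strings, so long free-text answers would need truncating.

```ts
// Sketch of sending widget feedback as a Plausible custom event.
declare global {
  interface Window {
    plausible?: (event: string, options?: { props: Record<string, string> }) => void;
  }
}

export function sendFeedbackEvent(category: string, summary: string): void {
  // window.plausible is injected by the Plausible tracking snippet already on the site;
  // the optional chaining means the call is a no-op if the snippet is blocked.
  window.plausible?.('Feedback', {
    props: { category, summary: summary.slice(0, 200) },
  });
}
```

Because events land in the existing Plausible dashboard, the team could review them during the normal yearly update cycle rather than reacting in real time.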
An instance of Plausible [33] is being used to anonymously track which pages are being visited,
though I don't feel enough is being done to utilize this tool. To comply with GDPR regulations,
analytics should remain as anonymous as possible; however, I feel the team could investigate how to
update the website's tools and privacy policies to collect more useful information, such as which
study-level filters are being used, which terms are being searched for, or to add an opt-in feature for
the inline feedback widget.
Further Suggested Changes
Conversational User Interface
The site can be searched using keywords and fuzzy matching [34], which returns lists of related page
sections. It's very effective, and I'm glad the solution doesn't rely on a third party to crawl the site.
However, the rise I've seen in the use of digital assistants like Apple's Siri, and chatbots like Bard and
ChatGPT [35], means that students may become accustomed to querying content using natural
language and expect the same in return. For this reason, future development of the site should
consider technology like TensorFlow.js [36] or the ONNX Web Runtime [37] to implement a
lightweight, question-answering, conversational interface (chatbot). It could be implemented within
the search functionality or as a standalone widget, and it should target only shu-dev content to
maintain performance, safety, and relevance.
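As a rough sketch of what this could look like with TensorFlow.js, the snippet below uses the pre-trained MobileBERT question-answering model from the @tensorflow-models/qna package to answer a question against a page's plaintext. This is an assumption-laden sketch: the package choice, and the idea of re-using the page plaintext (for example, the value produced by the remark plugin sketched earlier) as the answer passage, are mine rather than an agreed design.

```ts
// In-browser question answering restricted to shu-dev content.
import '@tensorflow/tfjs-backend-webgl';
import * as qna from '@tensorflow-models/qna';

export async function answerFromPage(
  question: string,
  pagePlaintext: string,
): Promise<string | null> {
  // Downloads the MobileBERT weights on first use; subsequent calls reuse the model.
  const model = await qna.load();
  // Answers are extracted spans from the supplied passage, ranked by score.
  const answers = await model.findAnswers(question, pagePlaintext);
  return answers.length > 0 ? answers[0].text : null;
}
```

Because the passage is limited to shu-dev content, the widget cannot wander off-topic, which keeps it aligned with the performance, safety, and relevance goals above.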
References
1. ASERG. (2023, April 10). SHU Development Process. Retrieved from https://aserg.codeberg.page/shu-dev-process/en/introduction
2. Pressman, R. S., & Maxim, B. R. (2010). Software engineering: A practitioner's approach (7th ed.). McGraw-Hill Higher Education.
3. Daring Fireball. (n.d.). Markdown. Retrieved April 10, 2023, from https://daringfireball.net/projects/markdown/
4. ASERG. (n.d.). Shu-dev-process repository. Retrieved April 10, 2023, from https://codeberg.org/aserg/shu-dev-process
5. MkDocs. (n.d.). Documentation. Retrieved April 10, 2023, from https://www.mkdocs.org/
6. Pallets. (n.d.). Jinja Documentation. Retrieved April 10, 2023, from https://jinja.palletsprojects.com/en/3.1.x/
7. Astro. (n.d.). Astro Web Framework. Retrieved April 10, 2023, from https://astro.build/
8. Rehype. (n.d.). Repository. Retrieved April 10, 2023, from https://github.com/rehypejs/rehype
9. Remark. (n.d.). Markdown processor. Retrieved April 10, 2023, from https://remark.js.org/
10. ASERG. (n.d.). SHU Development Process (Legacy version). Retrieved April 10, 2023, from https://aserg.codeberg.page/shu-dev-process-legacy/
11. UI Patterns. (n.d.). Retrieved April 21, 2023, from https://ui-patterns.com/patterns/
12. ASERG. (n.d.). Socratic Questioning. Retrieved April 21, 2023, from https://aserg.codeberg.page/shu-dev-process/en/initiation/#six-types-of-socratic-questions
13. Cloudflare. (n.d.). How CAPTCHAs Work. Retrieved April 21, 2023, from https://www.cloudflare.com/en-gb/learning/bots/how-captchas-work/
14. ReadTheDocs. (n.d.). Retrieved April 21, 2023, from https://readthedocs.org/
15. Microservices.io. (n.d.). Retrieved April 21, 2023, from https://microservices.io/
16. OpenAPI Initiative. (2020). OpenAPI Specification, Version 3.1.0. Retrieved April 21, 2023, from https://spec.openapis.org/oas/v3.1.0
17. W3C. (2018). Web Content Accessibility Guidelines (WCAG) 2.1. Retrieved April 21, 2023, from https://www.w3.org/TR/WCAG21/
18. Deque Systems Inc. (n.d.). Axe DevTools. Retrieved April 21, 2023, from https://www.deque.com/axe/devtools/
19. Harrison, W. (2006). Eating Your Own Dog Food. IEEE Software, 23(3), 5-7. https://doi.org/10.1109/MS.2006.72
20. DeadLinkChecker. (n.d.). Retrieved April 21, 2023, from https://deadlinkchecker.com
21. Dr. Link Check. (n.d.). Retrieved April 21, 2023, from https://www.drlinkcheck.com/
22. SHU Development Process. (n.d.). Feature Branches – Version Control. Retrieved April 21, 2023, from https://aserg.codeberg.page/shu-dev-process/en/planning/version-control/#feature-branches
23. Microsoft Corporation. (n.d.). Playwright. Retrieved April 21, 2023, from https://playwright.dev/
24. Codeberg Community. (2023, April 10). Codeberg Pages. Retrieved April 10, 2023, from https://docs.codeberg.org/codeberg-pages/
25. Cloudflare, Inc. (n.d.). Pages. Retrieved April 21, 2023, from https://pages.cloudflare.com/
26. Gitea. (2022, December 1). Gitea Actions. Retrieved April 21, 2023, from https://blog.gitea.io/2022/12/feature-preview-gitea-actions/
27. Storyblok GmbH. (n.d.). Storyblok. Retrieved April 21, 2023, from https://www.storyblok.com/home
28. Contentful GmbH. (n.d.). Contentful. Retrieved April 21, 2023, from https://www.contentful.com/
29. Sanity.io, Inc. (n.d.). Sanity. Retrieved April 21, 2023, from https://www.sanity.io/
30. Forestry Labs, Inc. (2023, April 11). Tina CMS. Retrieved April 11, 2023, from https://tina.io/
31. Front Matter. (n.d.). Frontmatter CMS. Retrieved April 21, 2023, from https://frontmatter.codes/
32. Plausible Analytics AB. (n.d.). Custom Event Goals. Retrieved April 21, 2023, from https://plausible.io/docs/custom-event-goals
33. Plausible Analytics AB. (n.d.). Plausible. Retrieved April 21, 2023, from https://plausible.io/
34. Redis Labs Ltd. (2021, December 6). What is Fuzzy Matching? Retrieved April 10, 2023, from https://redis.com/blog/what-is-fuzzy-matching/
35. OpenAI. (n.d.). ChatGPT. Retrieved April 21, 2023, from https://chat.openai.com/
36. Google LLC. (n.d.). TensorFlow.js. Retrieved April 21, 2023, from https://www.tensorflow.org/js
37. Microsoft Corporation. (n.d.). ONNX Runtime. Retrieved April 10, 2023, from https://onnxruntime.ai/