Media Sharing Application for Medical Professionals


The year 2020 was tough, I don't even need to mention it: people suffered, lost family members, lost jobs... We IT professionals are lucky because most of us could easily switch to working from home, and the IT market, in general, warmed up during the pandemic.

I personally had already been working from home since 2012, and I'm also grateful: my family and friends have faced the pandemic without any serious harm, and I managed to make a career transition I had been planning for years, although none of that does any good for the people we lost 😞

My 2021 New Year's resolutions included learning or improving my skills in several technologies such as TypeScript, the AWS ecosystem, and Next.js; I was also curious to try Vercel's environment, so the Vercel/Hashnode Hackathon came in handy for this purpose. Let's dive in!

A Medical app to share content with patients

I was puzzled at first: two fields that really suffered during the pandemic were education and healthcare. I wanted to help one of those areas, and since I already work for a company in the educational field, I wanted to try something different!

After studying the pain points of the medical field, talking to friends, and doing a little research, I decided to build a medical web app to share content with patients. It would help medical professionals keep patients informed about their treatment by allowing them to send audio/video instructions, prescriptions, and medical exam results.

If you are building something from scratch, you should start with an MVP, and that's what I did: as a minimum viable product, I focused on a few specific features:

  • The app should allow the user to record a video and share it with a named patient
  • Allow sharing through WhatsApp or by copying the link
  • Render the shared content on a detail page

The app itself can be much more than that, but it is a good starting point! After drawing some screens with pen and paper, I was ready to start it up!

The project would use technologies and concepts such as:

  • React/Next.js
  • DynamoDB
  • S3
  • Tailwind CSS
  • TypeScript
  • Unit testing

You can see a live preview of the project: medishare.vercel.app

Getting started

I started with bare Next.js, but very early on I discovered the Vercel CLI, and the development experience was amazing! A vercel init nextjs did the trick!

I'm moving from ES6 toward TypeScript, so the first step to kickstart the project was to enable TypeScript support (Next.js only needs a tsconfig.json and the TypeScript dependencies to pick it up).

I didn't want to spend too much time styling the app, and I wanted to avoid an over-bloated ready-made UI kit. Between styled-components and Tailwind CSS, I went with the latter. The installation instructions were easy to follow.

To check that everything was wired up, I replaced the Next.js index file with a simple implementation:

import React from 'react';
import Head from 'next/head';
import Title from '@atoms/Title';

const Home: React.FC = () => (
  <div>
    <Head>
      <title>Shared Content</title>
      <link rel="icon" href="/favicon.ico" />
    </Head>
    <main>
      <Title>Shared Content</Title>
    </main>
  </div>
);

export default Home;
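
The Title component imported above is just a tiny presentational atom living in the repo. A minimal sketch of what such a component could look like, assuming a couple of Tailwind utility classes (the class names are illustrative, not necessarily the ones in the repository):

import React from 'react';

// Hypothetical sketch of the @atoms/Title atom, styled with Tailwind utilities.
const Title: React.FC = ({ children }) => (
  <h1 className="text-2xl font-bold text-gray-800">{children}</h1>
);

export default Title;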

I also unit-tested it to be sure Jest was all set up:

import { render, screen } from '@testing-library/react';
import "@testing-library/jest-dom/extend-expect";
import Index from '../index';

describe('index page', () => {
  it('should render the index page without errors', () => {
    render(<Index />);
    screen.getByText('Shared Content');
  });
});

Everything ✅ green ✅ here! We can move forward 😁

But wait, you probably want to see the implemented code, don't you? Feel free to grab it from my GitHub repo! git log will be your friend: for instance, you can see this kickstart code by visiting this tag. Give it a star if you liked it 😉

Video upload to AWS S3

I had a specific draft in mind: I would need a form to record the video, get the recipient's name and, at a given step, post the video file to an API, so that I could show links to share the content or visit a detail page.

I picked AWS S3 to store the videos uploaded by the user, and it is good practice to use presigned URLs to send the file directly to S3.

I could use a presigned URL to PUT the file object, but I wouldn't have much control over the upload (e.g. limiting the file size). The natural solution was to create a presigned POST URL, whose policy can constrain the upload.
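
To make the trade-off concrete, here is a rough sketch of how the API side can build such a presigned POST with the AWS SDK v2, capping the upload size through a content-length-range condition. Bucket name, expiry, and the size limit are illustrative values, not necessarily what the repository uses:

import S3 from 'aws-sdk/clients/s3';

const s3 = new S3();

// Sketch: a presigned POST whose policy restricts the content type and caps the
// upload size. Credentials are assumed to be already configured in the environment.
export function buildPresignedPost(id: string) {
  return s3.createPresignedPost({
    Bucket: 'bucket-name',
    Fields: { key: `${id}.webm`, 'Content-Type': 'video/webm' },
    Expires: 60, // seconds
    Conditions: [['content-length-range', 0, 50 * 1024 * 1024]], // up to ~50 MB
  });
}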

The Shared Content representing the upload would be saved in 4 steps:

  • Create a unique ID to identify the filename (using uuid)
  • Hit an API entry-point to create the presigned POST URL
  • POST the file through the obtained URL
  • POST the URL and additional metadata to be stored in a database

Hmm, these steps all belong to a single action triggered when saving the form, no matter how that happens, so let's write a service call for it:

export default async function createSharedContent({ name, file }: { name: string, file: string }): Promise<SharedContent> {
  const id = uuidv4();                                      // unique id that also names the file
  const { url, fields } = await getSignedPostUrl({ id });   // ask the API for a presigned POST URL
  await postFile({ url, fields, file });                    // upload the file straight to S3
  return saveSharedContent({ id, name, url });              // persist the metadata through the API
}
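
Of the helpers above, postFile is the one that talks to S3 directly. The real implementation lives in the repo; a rough sketch, assuming the browser's FormData and fetch, might look like this:

// Hypothetical sketch of the postFile helper: S3 expects the presigned fields to
// be appended to the form data before the file itself.
async function postFile({ url, fields, file }: { url: string; fields: Record<string, string>; file: Blob | string }) {
  const body = new FormData();
  Object.entries(fields).forEach(([key, value]) => body.append(key, value));
  body.append('file', file);
  const response = await fetch(url, { method: 'POST', body });
  if (!response.ok) throw new Error(`Upload failed with status ${response.status}`);
}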

By isolating this implementation, my form just needs to await this function:

onSubmit = async (data) => {
  await createSharedContent(data);
}

My tests would be easier to write too, since I could mock this service call in my form tests and abstract it away. See the whole implementation of this service call, as well as its test, in this tag from my GitHub repo.

Video upload form

To upload the video, I'm using react-video-recorder, although react-media-recorder is also a promising solution, especially if you need more customization. For my MVP, the former was a better option.
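
In case you haven't used it, react-video-recorder hands the recorded blob back through a callback. A minimal usage sketch follows; the surrounding state handling is illustrative, and in a TypeScript project you may need a small module declaration since the package ships without types:

import React, { useState } from 'react';
import VideoRecorder from 'react-video-recorder';

// Sketch: keep the recorded blob in state so the form can upload it later.
const RecordStep: React.FC = () => {
  const [videoBlob, setVideoBlob] = useState<Blob | null>(null);

  return (
    <div>
      <VideoRecorder onRecordingComplete={(blob: Blob) => setVideoBlob(blob)} />
      {videoBlob && <p>Video recorded and ready to upload!</p>}
    </div>
  );
};

export default RecordStep;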

Before starting to implement the form, let's write a test scenario to guide us through the implementation:

  it('should create the share obj after the user record video, set dest and allow share through whatsapp', async () => {
    render(<ShareForm />);
    fireEvent.click(screen.getByRole('button', { name: /record a video/ }));
    assertVideoRecorderWasRendered();

    // After the video has been recorded, the name field would be shown
    const nameField = await screen.findByPlaceholderText(/patient name/);
    fireEvent.change(nameField, { target: { value: recipientName } });
    fireEvent.blur(nameField);
    expect(nameField).toHaveValue(recipientName);

    // After the blur event, we can save the video and just test our createSharedContent function was called!
    screen.getByText('Uploading video, please wait...');
    expect(createSharedContent).toHaveBeenCalledWith({ name: recipientName, file: 'video-blob' });

    // And finally, when successful, the button to actually share the video would be there
    const link = await screen.findByRole('link', { name: 'Whatsapp' });
    const title = 'Video shared';
    const url = `http://domain.tdl/share-${sharedContent.id}`;
    expect(link).toHaveProperty('href', encodeURI(`whatsapp://send?text=${title}: ${url}`));
  });
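
For the test above, createSharedContent is mocked at the module level, roughly like this (the import path and fixture values are illustrative, the real ones live in the repo):

// Hypothetical mock setup: replace the real service call with a jest mock that
// resolves to a known shared content object.
import createSharedContent from '../services/createSharedContent';

jest.mock('../services/createSharedContent');

const recipientName = 'John Doe';
const sharedContent = { id: 'some-uuid', name: recipientName, filename: 'some-uuid.webm' };

beforeEach(() => {
  (createSharedContent as jest.Mock).mockResolvedValue(sharedContent);
});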

You can take a look at this tag to see the form implementation.

Server-side implementation

Now things get exciting! I have a form to record a video and a service call that sends the file to a hypothetical API; it is time to implement the actual API endpoints and, since I'm on Vercel, that is done with serverless functions. The cool part is that the Vercel CLI allows me to run the serverless functions locally!
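
In a Next.js project, each endpoint is just a file under the api directory exporting a handler. A minimal sketch of that shape, using the Next.js API route types (the project could equally use Vercel's own @vercel/node types; the route path here is illustrative):

import type { NextApiRequest, NextApiResponse } from 'next';

// Sketch: a Next.js API route that Vercel deploys as a serverless function.
// Locally, `vercel dev` serves it under /api/... just like production.
export default function handler(req: NextApiRequest, res: NextApiResponse) {
  res.status(200).json({ message: 'hello from a serverless function' });
}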

Signed POST URL

Starting with the /api/v1/upload-urls API entry-point, I first created a test using serverless-s3-local to mock the real S3 communication:

    const { status, data } = await axios.post('http://localhost:3000/api/v1/upload-urls', { id });
    expect(status).toEqual(200);
    expect(data.url).toEqual('http://s3.url/bucket-name');
    expect(data.fields['Content-Type']).toEqual('video/webm');
    expect(data.fields['bucket']).toEqual('bucket-name');
    expect(data.fields['key']).toEqual('the-video-uuid.webm');

Basically, I'm expecting my API entry-point to return the URL to POST the file, as well as the fields required by S3 - I'm testing only the fields I have control over.

The test runs against the real application running on localhost:3000 (i.e. vercel dev). That's why I used axios, and to make it work, a little tweak was needed:

beforeAll(() => {
  // Force axios to use Node's http adapter instead of the XHR adapter picked up
  // in the jsdom test environment, so requests actually reach localhost:3000.
  axios.defaults.adapter = require('axios/lib/adapters/http');
});

See the full implementation of this API entry-point in this specific tag.

Saving the object to the database

For the MVP, I decided to use DynamoDB and the dynamodb-local Docker image for local development. The /api/v1/shared-contents tests would use the previously started Docker container to store and retrieve data (I prefer to avoid mocking the database in this case).

The object to be stored would be simple:

SharedContent:
  id: UUID
  name: String  # to store the client name
  filename: String # to store the file URL

It is a preliminary version: I foresee the app holding more than one file per shared object, as well as more complete client data, but let's avoid overengineering for now 🤯 The test for the create endpoint looks like this:

const payload = SharedContentFactory.build();
const { status, data } = await axios.post('http://localhost:3000/api/v1/shared-contents', payload);
expect(status).toEqual(201);
expect(data).toEqual(payload);

const sharedContent = await SharedContent.get(payload.id);
expect(sharedContent.name).toEqual(payload.name);
expect(sharedContent.filename).toEqual(payload.filename);
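
The SharedContent model used in the test above is a thin layer over DynamoDB. The final code relies on dynamoose (see Packing up below); a rough sketch of such a model, pointing at the local container during development (endpoint, table name, and environment check are illustrative):

import * as dynamoose from 'dynamoose';

// Sketch: point dynamoose at the dockerized dynamodb-local instance while developing.
if (process.env.NODE_ENV !== 'production') {
  dynamoose.aws.ddb.local('http://localhost:8000');
}

const SharedContentSchema = new dynamoose.Schema({
  id: { type: String, hashKey: true },
  name: String,
  filename: String,
});

export const SharedContent = dynamoose.model('SharedContent', SharedContentSchema);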

You can check this file to see the implementation of this API entry-point, or you can see the full test implementation.

HINT: it had been a while since I installed the AWS CLI, and I had forgotten I was using an alias for a dockerized service. As dynamodb-local is also dockerized, I had to set up a common network for both services (and I spent a little time recalling that my AWS CLI was also dockerized 😂). I ended up coordinating them with docker-compose.

Packing up

The MVP is close to done! Some additional adjustments were still needed, though:

  • Using dynamoose rather than relying directly on the AWS SDK
  • Implementing the retrieve API and the detail page
  • Using signed URLs to safely get the file from S3 (see the sketch after this list)
  • Some minor user experience tweaks
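
For the signed-URL item, the idea is that the bucket stays private and the detail page gets a short-lived link instead. A minimal sketch with the AWS SDK v2 (bucket name and expiry are illustrative):

import S3 from 'aws-sdk/clients/s3';

const s3 = new S3();

// Sketch: a short-lived signed URL so the detail page can play the video from a
// private bucket.
export function getDownloadUrl(filename: string): string {
  return s3.getSignedUrl('getObject', {
    Bucket: 'bucket-name',
    Key: filename,
    Expires: 5 * 60, // seconds
  });
}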

Take a look at the final implementation by visiting the repository.

Experience so far

I already knew Vercel from its open source projects like Next.js and especially SWR. Serverless and Jamstack concepts/tooling are great, and Vercel helps a lot in adopting them. It reduces the DevOps hassle:

  • I got a CD pipeline for free
  • I could deploy preview environments from specific branches, without touching production
  • The local development experience has been very pleasant so far too!

Regarding the project itself, it is far from being done, but it was a good starting point!

The next steps would probably be adding an authentication layer, planning and optimizing the DynamoDB database, adding a CI step to the pipeline, and making better use of TypeScript.

Thanks for reading this article, and please feel free to leave any comment, question, or suggestion!

#VercelHashnode