In modern application development, delivering personalized and controlled user experiences is paramount. This necessitates the ability to toggle features dynamically, enabling developers to adapt their applications in response to changing user needs and preferences. Feature flags, also known as feature toggles, have emerged as a critical tool in achieving this flexibility. These flags empower developers to activate or deactivate specific functionalities based on various criteria such as user access, geographic location, or user behavior. In this guide, we'll explore how to integrate feature flags into your React applications using IBM App Configuration, a robust platform designed to manage application features and configurations. By leveraging feature flags and IBM App Configuration, developers can unlock enhanced flexibility and control in their development process, ultimately delivering tailored user experiences with ease.

IBM App Configuration can be integrated with any framework, be it React, Angular, Java, Go, etc. React is a popular JavaScript framework that uses a component-based architecture, allowing developers to build reusable and modular UI components. This makes it easier to manage complex user interfaces by breaking them down into smaller, self-contained units, and it makes React applications particularly well-suited for integrating feature flags: adding feature flags to React components makes those components easier to manage.

Integrating With IBM App Configuration

IBM App Configuration provides a comprehensive platform for managing feature flags, environments, collections, segments, and more. Before delving into the tutorial, it's important to understand why integrating your React application with IBM App Configuration is necessary and what benefits it offers.

By integrating with IBM App Configuration, developers gain the ability to dynamically toggle features on and off within their applications. This capability is crucial for modern application development, as it allows developers to deliver controlled and personalized user experiences. With feature flags, developers can activate or deactivate specific functionalities based on factors such as user access, geographic location, or user preferences. This not only enhances user experiences but also provides developers with greater flexibility and control over feature deployments. Additionally, IBM App Configuration offers segments for targeted rollouts, enabling developers to gradually release features to specific groups of users. Overall, integrating with IBM App Configuration empowers developers to adapt their applications' behavior in real time, improving agility and enhancing user satisfaction.

To begin integrating your React application with App Configuration, follow these steps:

1. Create an Instance

Start by creating an instance of IBM App Configuration on cloud.ibm.com. Within the instance, create an environment, such as Dev, to manage your configurations. Then create a collection. Collections come in handy when multiple feature flags are created for various projects: each project can have a collection in the same App Configuration instance, and you can tag feature flags with the collection they belong to.

2. Generate Credentials

Access the service credentials section and generate new credentials.
These credentials will be required to authenticate your React application with App Configuration.

3. Install SDK

In your React application, install the IBM App Configuration React SDK using npm:

npm i ibm-appconfiguration-react-client-sdk

4. Configure Provider

In your index.js or App.js, wrap your application component with AppConfigProvider to enable App Configuration within your React app. The provider must wrap the application at its top level to ensure the entire application has access. The AppConfigProvider requires various parameters, as shown in the screenshot below; all of these values can be found in the credentials you created.

5. Access Feature Flags

Now, within your App Configuration instance, create feature flags to control specific functionalities. Copy the feature flag ID for further integration into your code.

Integrating Feature Flags Into React Components

Once you've set up App Configuration in your React application, you can seamlessly integrate feature flags into your components.

Enable Components Dynamically

Use the feature flag ID copied from the App Configuration instance to toggle specific components based on the flag's status. This allows you to enable or disable features dynamically without redeploying your application (a minimal sketch appears at the end of this article).

Utilizing Segments for Targeted Rollouts

IBM App Configuration offers segments to target specific groups of users, enabling personalized experiences and controlled rollouts. Here's how to leverage segments effectively:

Define Segments

Create segments based on user properties, behaviors, or other criteria to target specific user groups.

Rollout Percentage

Adjust the rollout percentage to control the percentage of users who receive the feature within a targeted segment. This enables gradual rollouts or A/B testing scenarios.

Example

If the rollout percentage is set to 100% and a particular segment is targeted, then the feature is rolled out to all the users in that segment. If the rollout percentage is set between 1% and 99% (for example, 60%) and a particular segment is targeted, then the feature is rolled out to a random 60% of the users in that segment. If the rollout percentage is set to 0% and a particular segment is targeted, then the feature is rolled out to none of the users in that segment.

Conclusion

Integrating feature flags with IBM App Configuration empowers React developers to implement dynamic feature toggling and targeted rollouts seamlessly. By leveraging feature flags and segments, developers can deliver personalized user experiences while maintaining control over feature deployments. Start integrating feature flags into your React applications today to unlock enhanced flexibility and control in your development process.
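To make steps 4 and 5 concrete, here is a minimal sketch of what the provider setup and a flag check might look like. The provider props and the useFeature hook shown here are assumptions based on the article's description and the generated credentials, not the verified SDK surface; check the ibm-appconfiguration-react-client-sdk documentation for the exact API.

JavaScript
// index.js - a minimal sketch; prop names and the useFeature hook are assumptions,
// not the confirmed SDK API. Replace the placeholder values with your own credentials.
import React from 'react';
import ReactDOM from 'react-dom/client';
import { AppConfigProvider, useFeature } from 'ibm-appconfiguration-react-client-sdk';

function NewDashboard() {
  // 'new-dashboard' is a hypothetical feature flag ID copied from the App Configuration instance.
  const { isEnabled } = useFeature('new-dashboard');
  return isEnabled ? <p>The new dashboard is live!</p> : <p>Classic dashboard</p>;
}

function App() {
  return <NewDashboard />;
}

ReactDOM.createRoot(document.getElementById('root')).render(
  <AppConfigProvider
    region="us-south"              // from the generated service credentials (illustrative value)
    guid="YOUR_INSTANCE_GUID"
    apikey="YOUR_APIKEY"
    collectionId="my-collection"   // the collection created in step 1
    environmentId="dev"            // the environment created in step 1
  >
    <App />
  </AppConfigProvider>
);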
NodeJS is a leading software development technology with a wide range of frameworks. These frameworks come with features, templates, and libraries that help developers overcome setbacks and build applications faster with fewer resources. This article takes an in-depth look at NodeJS frameworks in 2024. Read on to discover what they are, their features, and their application. What Is NodeJS? NodeJS is an open-source server environment that runs on various platforms, including Windows, Linux, Unix, Mac OS X, and more. It is free, written in JS, and built on Chrome’s V8 JavaScript engine. Here’s how NodeJS is described on its official website: “NodeJS is a platform built on Chrome’s JavaScript runtime for easily building fast and scalable network applications. As an asynchronous event-driven JavaScript runtime, NodeJS is designed to build scalable network applications… Users of NodeJS are free from worries of dead-locking the process since there are no locks. Almost no function in NodeJS directly performs I/O, so the process never blocks except when the I/O is performed using synchronous methods of the NodeJS standard library. Because nothing blocks, scalable systems are very reasonable to develop in NodeJS.” Ryan Dahl developed this cross-platform runtime tool for building server-side and networking programs. NodeJS makes development easy and fast by offering a wide collection of JS modules, enabling developers to create web applications with higher accuracy and less stress. General Features of NodeJS NodeJS has some distinctive characteristics: Single-Threaded NodeJS utilizes a single-threaded yet scalable style coupled with an event loop model. One of the biggest draws of this setup is that it’s capable of processing multiple requests. With event looping, NodeJS can perform non-blocking input-output operations. Highly Scalable Applications developed with NodeJS are highly scalable because the platform operates asynchronously. It works on a single thread, which enables the system to handle multiple requests simultaneously. Once each response is ready, it is forwarded back to the client. No Buffering NodeJS applications cut down the entire time required for processing by outputting data in blocks with the help of the callback function. They do not buffer any data. Open Source This simply means that the platform is free to use and open to contributions from well-meaning developers. Performance Since NodeJS is built on Google Chrome’s V8 JavaScript engine, it facilitates faster execution of code. Leveraging asynchronous programming and non-blocking concepts, it can offer high-speed performance. The V8 JS engine makes code execution and implementation easier, faster, and more efficient by compiling JavaScript code into machine format. Caching The platform also stands out in its caching ability. It caches modules and makes retrieving web pages faster and easier. With caching, there is no need for the re-execution of codes after the first request. The module can readily be retrieved seamlessly from the application’s memory. License The platform is available under the MIT license. What Are the Top NodeJS Frameworks for the Backend? Frameworks for NodeJS help software architects to develop applications efficiently and with ease. Here are the best NodeJS backend frameworks: 1. Express.js Express.js is an open-source NodeJS module with around 18 million downloads per week, present in more than 20k stacks, and used by over 1,733 companies worldwide. 
This is a flexible top NodeJS framework with cutting-edge features, enabling developers to build robust single-page, multi-page, and hybrid web applications. With Express.js, the development of Node-based applications is fast and easy. It is a minimal framework whose many capabilities are accessible through plugins. The original developer of Express.js is TJ Holowaychuk, and it was first released on May 22, 2010. It is widely known and used by leading corporations like Fox Sports, PayPal, Uber, IBM, Twitter, Stack, Accenture, and so on.

Key Features of Express.js

Here are the features of Express.js:

Faster server-side development
Great performance: It offers a thin layer of robust application development features without tampering with NodeJS' capabilities.
Many tools are based on Express.js
Dynamic rendering of HTML pages
Enables setting up middleware to respond to HTTP requests
Very high test coverage
Efficient routing
Content negotiation
Executable for generating applications swiftly
Debugging: The framework makes debugging very easy by offering a debugging feature capable of showing developers where the bugs are

When To Use Express.js

Due to the high-end features outlined above (detailed routing, configuration, security features, and debugging mechanisms), this NodeJS framework is ideal for any enterprise-level or web-based app. That said, it is advisable to do a thorough NodeJS framework comparison before making a choice.

2. Next.js

Next.js is an open-source, minimalistic framework for server-rendered React applications. The tool has about 1.8 million downloads, is present in more than 2.7k stacks, and is used by over 800 organizations. Developers leverage the full-stack framework to build highly interactive platforms with SEO-friendly features. Version 12 of the tool was released in October 2021, and this latest version promises to offer the best value. This top NodeJS framework enables React-based web application capabilities like server-side rendering and static page generation. It offers an amazing development experience with the features you need for production, ranging from smart bundling and TypeScript support to server rendering and so on. In addition, no configuration is needed. It makes building fast and user-friendly static websites and web applications easy using React. With Automatic Static Optimization, Next.js builds hybrid applications that feature both statically generated and server-rendered pages.

Features of Next.js

Here are the key features of Next.js:

Great page-based routing API
Hybrid pages
Automatic code splitting
Image optimization
Built-in CSS and Sass support
Fully extendable
Detailed documentation
Faster development
Client-side routing with prefetching

When To Use Next.js

If you are experienced in React, you can leverage Next.js to build a high-demand app or web shop. The framework comes with a range of modern web technologies you can use to develop robust, fast, and highly interactive applications.

3. Koa

Koa is an open-source backend tech stack with about 1 million downloads per week, present in more than 400 stacks, and used by up to 90 companies. The framework is going for a big jump with version 2. It was built by the same set of developers that built Express, but they created it with the purpose of providing something smaller and more expressive that can offer a stronger foundation for web applications and APIs.
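Before looking at Koa's feature list, here is a minimal sketch of the async-function style it is known for; the routes and logging are illustrative only.

JavaScript
// A tiny Koa app: async middleware functions instead of nested callbacks.
const Koa = require('koa');
const app = new Koa();

// Logging middleware: awaits the downstream middleware, then reports the timing.
app.use(async (ctx, next) => {
  const start = Date.now();
  await next();
  console.log(`${ctx.method} ${ctx.url} - ${Date.now() - start}ms`);
});

// Response middleware: Koa exposes ctx.request/ctx.response rather than Node's raw req/res.
app.use(async (ctx) => {
  ctx.body = { message: 'Hello from Koa' };
});

app.listen(3000);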
This framework stands out because it uses async functions, enabling you to eliminate callbacks and improve bug handling. Koa leverages various tools and methods to make coding web applications and APIs easy and fun. The framework does not bundle any middleware. The tool is similar to other popular middleware technologies; however, it offers a suite of methods that promote interoperability, robustness, and ease of coding middleware. In a nutshell, the capabilities that Koa provides help developers build web applications and APIs faster with higher efficiency. Features of Koa Here are some of the key features that make Koa stand out from other best NodeJS frameworks: The framework is not bundled with any middleware. Small footprint: Being a lightweight and flexible tool, it has a smaller footprint when compared to other NodeJS frameworks. That notwithstanding, you have the flexibility to extend the framework using plugins – you can plug in a wide variety of modules. Contemporary framework: Koa is built using recent technologies and specifications (ECMAScript 2015). As a result, programs developed with it will likely be relevant for an extended period. Bug handling: The framework has features that streamline error handling and make it easier for programmers to spot and get rid of errors. This results in web applications with minimal crashes or issues. Faster development: One of the core goals of top NodeJS frameworks is to make software development faster and more fun. Koa, a lightweight and flexible framework, helps developers to accelerate development with its futuristic technologies. When To Use Koa The same team developed Koa and Express. Express provides features that “augment node,” while Koa was created with the objective to “fix and replace Node.” It stands out because it can simplify error handling and make apps free of callback hell. Instead of Node’s req and res objects, Koa exposes its ctx.request and ctx.response objects. On the flip side, Express augments the node’s req and res objects with extra features like routing and templating, which do not happen with Koa. It’s the ideal framework to use if you want to get rid of callbacks, while Express is suitable when you want to implement NodeJS and conventional NodeJS-style coding. 4. Nest.js Nest.js is a NodeJS framework that is great for developing scalable and efficient server-side applications. Nest has about 800K downloads per week, present in over 1K stacks, and is used by over 200 organizations. It is a progressive framework and an MIT-licensed open-source project. Through official support, an expert from the Nest core team could assist you whenever needed. Nest was developed with TypeScript, uses modern JavaScript, and combines object-oriented programming (OOP), functional programming (FP), and functional reactive programming (FRP). The framework makes application development easy and enables compatibility with a collection of other libraries, including Fastify. Nest stands out from NodeJS frameworks in providing an application architecture for the simplified development of scalable, maintainable, and efficient apps. Features of Nest.js The following are the key features of Nest.js: Nest solves the architecture problem: Even though there are several libraries, helpers, and tools for NodeJS, the server-side architecture problem has not been solved. Thanks to Nest, it offers an application architecture that makes the development of scalable, testable, maintainable, and loosely built applications. 
Easy to use: Nest.js is a progressive framework that is easy to learn and master. The architecture of this framework is similar to that of Angular, Java, and .Net. As a result, the learning curve is not steep, and developers can easily understand and use this system. It leverages TypeScript. Nest makes application unit testing easy and straightforward Ease of integration: It supports a range of Nest-specific modules. These modules easily integrate with technologies such as TypeORM, Mongoose, and more. It encourages code reusability. Amazing documentation When To Use Nest.js Nest is the ideal framework for the fast and efficient development of applications with simple structures. If you are looking to build apps that are scalable and easy to maintain, Nest is a great option. In addition to being among the fastest-growing NodeJS frameworks, users enjoy a large community and an active support system. With the support platform, developers can receive the official help they need for a dynamic development process, while the Nest community is a great place to interact with other developers and get insights and solutions to common development challenges. 5. Hapi.js This is an open-source NodeJS framework suitable for developing great and scalable web apps. Hapi.js has about 400K downloads per week, present in over 300 stacks, and more than 76 organizations admitted they use Hapi. The framework is ideal for building HTTP-proxy applications, websites, and API servers. Hapi was originally created by Walmart's mobile development team to handle their Black Friday traffic. Since then, it has been improved to become a powerful standalone Node framework that stands out from others with built-in modules and other essential capabilities. Hapi has some out-of-the-box features that enable developers to build scalable applications with minimal overhead. With Hapi, you have got nothing to worry about. The security, simplicity, and satisfaction associated with this framework are everything you need for creating powerful applications and enterprise-grade backend needs. Features of Hapi.js Here are the features that make Hapi one of the best NodeJS frameworks: Security: You do not have to worry about security when using Hapi. Every line of code is thoroughly verified, and there is an advanced security process to ensure the maximum safety of the platform. In addition, Hapi is a leading NodeJS framework with no external code dependencies. Some of the security features and processes include regular updates, end-to-end code hygiene, high-end authentication process, and in-house security architecture. Rich ecosystem: There is a wide range of official plugins. You can easily find a trusted and secure plugin you may need for critical functionalities. With its exhaustive range of plugins, you do not have to risk the security of your project by trusting external middleware – even when it appears to be trustworthy on npm. Quality: When it comes to quantifiable quality metrics, Hapi is one of the frameworks for NodeJS that scores higher than many others. When considering parameters like code clarity, coverage and style, and open issues, Hapi stands out. User experience: The framework enables friction-free development. Being a developer-first platform, there are advanced features to help you speed up some of the processes and increase your productivity. Straightforward implementation: It streamlines the development process and enables you to implement what works directly. 
The code does exactly what it is created to do; you do not have to waste time experimenting to see what might work or not. Easy-to-learn interface Predictability Extensibility and customization When To Use Hapi.js Hapi does not rely heavily on middleware. Important functionalities like body parsing, input/output validation, HTTP-friendly error objects, and more are integral parts of the framework. There is a wide range of plugins, and it is the only top NodeJS framework that does not depend on external dependencies. With its advanced functionalities, security, and reliability, Hapi stands out from other frameworks like Express (which heavily relies on middleware for a significant part of its capabilities). If you are considering implementing Express for your web app or Rest API project, Hapi is a reliable option. 6. Fastify Fastify is an open-source NodeJS tool with 21.7K stars on Github, 300K weekly downloads, and more than 33 companies have said they use Fastify. This framework provides an outstanding user experience, great plugin architecture, speed, and low overhead. Fastify is inspired by Hapi and Express. Given its performance, it is known as one of the fastest web frameworks. Popular organizations like Skeelo, Satiurn, 2hire, Commons.host, and many more are powered by Fastify. Features of Fastify Fastify is one of the best frameworks for NodeJS. Here are some of its amazing features: Great performance: It is the fastest NodeJS framework with the ability to serve up to 30 thousand requests per second. Fastify focuses on improved responsiveness and user experience, all at a lower cost. Highly extensible: Hooks, decorators, and plugins enable Fastify to be fully extensible. Developer-first framework: The framework is built with coders in mind. It is highly expressive with all the features developers need to build scalable applications faster without compromising quality, performance, and security. If you are looking for a high-performance and developer-friendly framework, Fastify checks off all the boxes. Logging: Due to how crucial and expensive logging is, Fastify works with the best and most affordable logger. TypeScript ready When To Use Fastify This is the ideal framework for building APIs that can handle a lot of traffic. When developing a server, Fastify is a great alternative to Express. If you want a top NodeJS framework that is secure, highly performant, fast, and reliable with low overhead, Fastify stands out as the best option. Conclusion of NodeJS frameworks NodeJS is unarguably a leading software development technology with many reliable and highly performant frameworks. These NodeJS frameworks make application development easier, faster, and more cost-effective. With a well-chosen framework at hand, you are likely to spend fewer resources and time on development – using templates and code libraries. NodeJS frameworks can help you create the type of application you have always wanted. However, the result you get is heavily dependent on the quality of your decision. For instance, choosing a framework that is not the best for the type of project will negatively impact your result. So, make sure you consider the requirements of your project.
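To close the framework overview, here is a minimal Fastify sketch illustrating the built-in (Pino-based) logging and async route handlers mentioned in its feature list; the routes themselves are illustrative only.

JavaScript
// A tiny Fastify server with the built-in logger enabled.
const fastify = require('fastify')({ logger: true });

// Async route handlers: the returned value is serialized as the JSON response.
fastify.get('/health', async () => ({ status: 'ok' }));

fastify.post('/echo', async (request, reply) => {
  reply.code(201);
  return { received: request.body };
});

// Fastify v4 accepts an options object here; older versions accept a plain port number.
fastify.listen({ port: 3000 }, (err) => {
  if (err) {
    fastify.log.error(err);
    process.exit(1);
  }
});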
Welcome back to the series where we have been building an application with Qwik that incorporates AI tooling from OpenAI. So far we've created a pretty cool app that uses AI to generate text and images.

Intro and Setup
Your First AI Prompt
Streaming Responses
How Does AI Work
Prompt Engineering
AI-Generated Images
Security and Reliability
Deploying

Now, there's just one more thing to do. It's launch time! I'll be deploying to Akamai's cloud computing services (formerly Linode), but these steps should work with any VPS provider. Let's do this!

Setup Runtime Adapter

There are a couple of things we need to get out of the way first: deciding where we are going to run our app, what runtime it will run in, and how the deployment pipeline should look. As I mentioned before, I'll be deploying to a VPS in Akamai's connected cloud, but any other VPS should work. For the runtime, I'll be using Node.js, and I'll keep the deployment simple by using Git.

Qwik is cool because it's designed to run in multiple JavaScript runtimes. That's handy, but it also means that our code isn't ready to run in production as is. Qwik needs to be aware of its runtime environment, which we can do with adapters. We can see and install available adapters with the command npm run qwik add. This will prompt us with several options for adapters, integrations, and plugins. In my case, I'll go down and select the Fastify adapter. It works well on a VPS running Node.js. You can select a different target if you prefer.

Once you select your integration, the terminal will show you the changes it's about to make and prompt you to confirm. You'll see that it wants to modify some files, create some new ones, install dependencies, and add some new npm scripts. Make sure you're comfortable with these changes before confirming. Once these changes are installed, your app will have what it needs to run in production. You can test this by building the production assets and running the serve command. (Note: For some reason, npm run build always hangs for me, so I run the client and server build scripts separately.)

npm run build.client && npm run build.server && npm run serve

This will build out our production assets and start the production server listening for requests at http://localhost:3000. If all goes well, you should be able to open that URL in your browser and see your app there. It won't actually work because it's missing the OpenAI API keys, but we'll sort that part out on the production server.

Push Changes To Git Repo

As mentioned above, this deployment process is going to be focused on simplicity, not automation. So rather than introducing more complex tooling like Docker containers or Kubernetes, we'll stick to a simpler, but more manual, process: using Git to deploy our code. I'll assume you already have some familiarity with Git and a remote repo you can push to. If not, please go make one now. You'll need to commit your changes and push them to your repo.

git commit -am "ready to commit" && git push origin main

Prepare Production Server

If you already have a VPS ready, feel free to skip this section. I'll be deploying to an Akamai VPS.
I won’t walk through the step-by-step process for setting up a server, but in case you’re interested, I chose the Nanode 1 GB shared CPU plan for $5/month with the following specs: Operating system: Ubuntu 22.04 LTS Location: Seattle, WA CPU: 1 RAM: 1 GB Storage: 25 GB Transfer: 1 TB Choosing different specs shouldn’t make a difference when it comes to running your app, although some of the commands to install any dependencies may be different. If you’ve never done this before, then try to match what I have above. You can even use a different provider, as long as you’re deploying to a server to which you have SSH access. Once you have your server provisioned and running, you should have a public IP address that looks something like 172.100.100.200. You can log into the server from your terminal with the following command: ssh root@172.100.100.200 You’ll have to provide the root password if you have not already set up an authorized key. We’ll use Git as a convenient tool to get our code from our repo into our server, so that will need to be installed. But before we do that, I always recommend updating the existing software. We can do the update and installation with the following command. sudo apt update && sudo apt install git -y Our server also needs Node.js to run our app. We could install the binary directly, but I prefer to use a tool called NVM, which allows us to easily manage Node versions. We can install it with this command: curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.7/install.sh | bash Once NVM is installed, you can install the latest version of Node with: nvm install node Note that the terminal may say that NVM is not installed. If you exit the server and sign back in, it should work. Upload, Build, and Run App With our server set up, it’s time to get our code installed. With Git, it’s relatively easy. We can copy our code into our server using the clone command. You’ll want to use your own repo, but it should look something like this: git clone https://github.com/AustinGil/versus.git Our source code is now on the server, but it’s still not quite ready to run. We still need to install the NPM dependencies, build the production assets, and provide any environment variables. Let’s do it! First, navigate to the folder where you just cloned the project. I used: cd versus The install is easy enough: npm install The build command is: npm run build However, if you have any type-checking or linting errors, it will hang there. You can either fix the errors (which you probably should) or bypass them and build anyway with this: npm run build.client & npm run build.server The latest version of the project source code has working types if you want to check that. The last step is a bit tricky. As we saw above, environment variables will not be injected from the .env file when running the production app. Instead, we can provide them at runtime right before the serve command like this: OPENAI_API_KEY=your_api_key npm run serve You’ll want to provide your own API key there in order for the OpenAI requests to work. Also, for Node.js deployments, there’s an extra, necessary step. You must also set an ORIGIN variable assigned to the full URL where the app will be running. Qwik needs this information to properly configure their CSRF protection. 
If you don’t know the URL, you can disable this feature in the /src/entry.preview.tsx file by setting the createQwikCity options property checkOrigin to false: export default createQwikCity({ render, qwikCityPlan, checkOrigin: false }); This process is outlined in more detail in the docs, but it’s recommended not to disable, as CSRF can be quite dangerous. And anyway, you’ll need a URL to deploy the app anyway, so better to just set the ORIGIN environment variable. Note that if you make this change, you’ll want to redeploy and rerun the build and serve commands. If everything is configured correctly and running, you should start seeing the logs from Fastify in the terminal, confirming that the app is up and running. {"level":30,"time":1703810454465,"pid":23834,"hostname":"localhost","msg":"Server listening at http://[::1]:3000"} Unfortunately, accessing the app via IP address and port number doesn’t show the app (at least not for me). This is likely a networking issue, but also something that will be solved in the next section, where we run our app at the root domain. The Missing Steps Technically, the app is deployed, built, and running, but in my opinion, there is a lot to be desired before we can call it “production-ready.” Some tutorials would assume you know how to do the rest, but I don’t want to do you like that. We’re going to cover: Running the app in background mode Restarting the app if the server crashes Accessing the app at the root domain Setting up an SSL certificate One thing you will need to do for yourself is buy the domain name. There are lots of good places. I’ve been a fan of Porkbun and Namesilo. I don’t think there’s a huge difference for which registrar you use, but I like these because they offer WHOIS privacy and email forwarding at no extra charge on top of their already low prices. Before we do anything else on the server, it’ll be a good idea to point your domain name’s A record (@) to the server’s IP address. Doing this sooner can help with propagation times. Now, back in the server, there’s one glaring issue we need to deal with first. When we run the npm run serve command, our app will run as long as we keep the terminal open. Obviously, it would be nice to exit out of the server, close our terminal, and walk away from our computer to go eat pizza without the app crashing. So we’ll want to run that command in the background. There are plenty of ways to accomplish this: Docker, Kubernetes, Pulumis, etc., but I don’t like to add too much complexity. So for a basic app, I like to use PM2, a Node.js process manager with great features, including the ability to run our app in the background. From inside your server, run this command to install PM2 as a global NPM module: npm install -g pm2 Once it’s installed, we can tell PM2 what command to run with the “start” command: pm2 start "npm run serve" PM2 has a lot of really nice features in addition to running our apps in the background. One thing you’ll want to be aware of is the command to view logs from your app: pm2 logs In addition to running our app in the background, PM2 can also be configured to start or restart any process if the server crashes. This is super helpful to avoid downtime. You can set that up with this command: pm2 startup Ok, our app is now running and will continue to run after a server restart. Great! But we still can’t get to it. Lol! My preferred solution is using Caddy. This will resolve the networking issues, work as a great reverse proxy, and take care of the whole SSL process for us. 
We can follow the install instructions from their documentation and run these five commands: sudo apt install -y debian-keyring debian-archive-keyring apt-transport-https curl curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/gpg.key' | sudo gpg --dearmor -o /usr/share/keyrings/caddy-stable-archive-keyring.gpg curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt' | sudo tee /etc/apt/sources.list.d/caddy-stable.list sudo apt update sudo apt install caddy Once that’s done, you can go to your server’s IP address and you should see the default Caddy welcome page: Progress! In addition to showing us something is working, this page also gives us some handy information on how to work with Caddy. Ideally, you’ve already pointed your domain name to the server’s IP address. Next, we’ll want to modify the Caddyfile: sudo nano /etc/caddy/Caddyfile As their instructions suggest, we’ll want to replace the :80 line with our domain (or subdomain), but instead of uploading static files or changing the site root, I want to remove (or comment out) the root line and enable the reverse_proxy line, pointing the reverse proxy to my Node.js app running at port 3000. versus.austingil.com { reverse_proxy localhost:3000 } After saving the file and reloading Caddy (systemctl reload caddy), the new Caddyfile changes should take effect. Note that it may take a few moments before the app is fully up and running. This is because one of Caddy’s features is to provision a new SSL certificate for the domain. It also sets up the automatic redirect from HTTP to HTTPS. So now if you go to your domain (or subdomain), you should be redirected to the HTTPS version running a reverse proxy in front of your generative AI application which is resilient to server crashes. How awesome is that!? Using PM2 we can also enable some load-balancing in case you’re running a server with multiple cores. The full PM2 command including environment variables and load-balancing might look something like this: OPENAI_API_KEY=your_api_key ORIGIN=example.com pm2 start "npm run serve" -i max Note that you may need to remove the current instance from PM2 and rerun the start command, you don’t have to restart the Caddy process unless you change the Caddy file, and any changes to the Node.js source code will require a rebuild before running it again. Hell Yeah! We Did It! Alright, that’s it for this blog post and this series. I sincerely hope you enjoyed both and learned some cool things. Today, we covered a lot of things you need to know to deploy an AI-powered application: Runtime adapters Building for production Environment variables Process managers Reverse-proxies SSL certificates If you missed any of the previous posts, be sure to go back and check them out. I’d love to know what you thought about the whole series. If you want, you can play with the app I built. Let me know if you deployed your own app. Also, if you have ideas for topics you’d like me to discuss in the future I’d love to hear them :) UPDATE: If you liked this project and are curious to see what it might look like as a SvelteKit app, check out this blog post by Tim Smith where he converts this existing app over. Thank you so much for reading.
This new era is characterized by the rise of decentralized applications (DApps), which operate on blockchain technology, offering enhanced security, transparency, and user sovereignty. As a full-stack developer, understanding how to build DApps using popular tools like Node.js is not just a skill upgrade; it's a doorway to the future of web development. In this article, we'll explore how Node.js, a versatile JavaScript runtime, can be a powerful tool in the creation of DApps. We'll walk through the basics of Web 3.0 and DApps, the role of Node.js in this new environment, and provide practical guidance on building a basic DApp. Section 1: Understanding the Basics Web 3.0: An Overview Web 3.0, often referred to as the third generation of the internet, is built upon the core concepts of decentralization, openness, and greater user utility. In contrast to Web 2.0, where data is centralized in the hands of a few large companies, Web 3.0 aims to return control and ownership of data back to users. This is achieved through blockchain technology, which allows for decentralized storage and operations. Decentralized Applications (DApps) Explained DApps are applications that run on a decentralized network supported by blockchain technology. Unlike traditional applications, which rely on centralized servers, DApps operate on a peer-to-peer network, which makes them more resistant to censorship and central points of failure. The benefits of DApps include increased security and transparency, reduced risk of data manipulation, and improved trust and privacy for users. However, they also present challenges, such as scalability issues and the need for new development paradigms. Section 2: The Role of Node.js in Web 3.0 Why Node.js for DApp Development Node.js, renowned for its efficiency and scalability in building network applications, stands as an ideal choice for DApp development. Its non-blocking, event-driven architecture makes it well-suited for handling the asynchronous nature of blockchain operations. Here's why Node.js is a key player in the Web 3.0 space: Asynchronous processing: Blockchain transactions are inherently asynchronous. Node.js excels in handling asynchronous operations, making it perfect for managing blockchain transactions and smart contract interactions. Scalability: Node.js can handle numerous concurrent connections with minimal overhead, a critical feature for DApps that might need to scale quickly. Rich ecosystem: Node.js boasts an extensive ecosystem of libraries and tools, including those specifically designed for blockchain-related tasks, such as Web3.js and ethers.js. Community and support: With a large and active community, Node.js offers vast resources for learning and troubleshooting, essential for the relatively new field of Web 3.0 development. Setting up the Development Environment To start developing DApps with Node.js, you need to set up an environment that includes the following tools and frameworks: Node.js: Ensure you have the latest stable version of Node.js installed. NPM (Node Package Manager): Comes with Node.js and is essential for managing packages. Truffle suite: A popular development framework for Ethereum, useful for developing, testing, and deploying smart contracts. Ganache: Part of the Truffle Suite, Ganache allows you to run a personal Ethereum blockchain on your local machine for testing and development purposes. 
Web3.js or ethers.js libraries: These JavaScript libraries allow you to interact with a local or remote Ethereum node using an HTTP or IPC connection. With these tools, you’re equipped to start building DApps that interact with Ethereum or other blockchain networks. Section 3: Building a Basic Decentralized Application Designing the DApp Architecture Before diving into coding, it's crucial to plan the architecture of your DApp. This involves deciding on the frontend and backend components, the blockchain network to interact with, and how these elements will communicate with each other. Frontend: This is what users will interact with. It can be built with any frontend technology, but in this context, we'll focus on integrating it with a Node.js backend. Backend: The backend will handle business logic, interact with the blockchain, and provide APIs for the front end. Node.js, with its efficient handling of I/O operations, is ideal for this. Blockchain interaction: Your DApp will interact with a blockchain, typically through smart contracts. These are self-executing contracts with the terms of the agreement directly written into code. Developing the Backend With Node.js Setting up a Node.js server: Create a new Node.js project and set up an Express.js server. This server will handle API requests from your front end. Writing smart contracts: You can write smart contracts in Solidity (for Ethereum-based DApps) and deploy them to your blockchain network. Integrating smart contracts with Node.js: Use the Web3.js or ethers.js library to interact with your deployed smart contracts. This integration allows your Node.js server to send transactions and query data from the blockchain. Connecting to a Blockchain Network Choosing a blockchain: Ethereum is a popular choice due to its extensive support and community, but other blockchains like Binance Smart Chain or Polkadot can also be considered based on your DApp’s requirements. Local blockchain development: Use Ganache for a local blockchain environment, which is crucial for development and testing. Integration with Node.js: Utilize Web3.js or ethers.js to connect your Node.js application to the blockchain. These libraries provide functions to interact with the Ethereum blockchain, such as sending transactions, interacting with smart contracts, and querying blockchain data. Section 4: Frontend Development and User Interface Building the Frontend Developing the front end of a DApp involves creating user interfaces that interact seamlessly with the blockchain via your Node.js backend. Here are key steps and considerations: Choosing a framework: While you can use any frontend framework, React.js is a popular choice due to its component-based architecture and efficient state management, which is beneficial for responsive DApp interfaces. Designing the user interface: Focus on simplicity and usability. Remember, DApp users might range from blockchain experts to novices, so clarity and ease of use are paramount. Integrating with the backend: Use RESTful APIs or GraphQL to connect your front end with the Node.js backend. This will allow your application to send and receive data from the server. Interacting With the Blockchain Web3.js or ethers.js on the front end: These libraries can also be used on the client side to interact directly with the blockchain for tasks like initiating transactions or querying smart contract states. 
Handling transactions: Implement UI elements to show transaction status and gas fees and to facilitate wallet connections (e.g., using MetaMask). Ensuring security and privacy: Implement standard security practices such as SSL/TLS encryption, and be mindful of the data you expose through the front end, considering the public nature of blockchain transactions. User Experience in DApps Educating the user: Given the novel nature of DApps, consider including educational tooltips or guides. Responsive and interactive design: Ensure the UI is responsive and provides real-time feedback, especially important during blockchain transactions which might take longer to complete. Accessibility: Accessibility is often overlooked in DApp development. Ensure that your application is accessible to all users, including those with disabilities. Section 5: Testing and Deployment Testing Your DApp Testing is a critical phase in DApp development, ensuring the reliability and security of your application. Here’s how you can approach it: Unit testing smart contracts: Use frameworks like Truffle or Hardhat for testing your smart contracts. Write tests to cover all functionalities and potential edge cases. Testing the Node.js backend: Implement unit and integration tests for your backend using tools like Mocha and Chai. This ensures your server-side logic and blockchain interactions are functioning correctly. Frontend testing: Use frameworks like Jest (for React apps) to test your frontend components. Ensure that the UI interacts correctly with your backend and displays blockchain data accurately. End-to-end testing: Conduct end-to-end tests to simulate real user interactions across the entire application. Tools like Cypress can automate browser-based interactions. Deployment Strategies for DApps Deploying a DApp involves multiple steps, given its decentralized nature: Smart contract deployment: Deploy your smart contracts to the blockchain. This is typically done on a testnet before moving to the mainnet. Verify and publish your contract source code, if applicable, for transparency. Backend deployment: Choose a cloud provider or a server to host your Node.js backend. Consider using containerization (like Docker) for ease of deployment and scalability. Frontend deployment: Host your front end on a web server. Static site hosts like Netlify or Vercel are popular choices for projects like these. Ensure that the frontend is securely connected to your backend and the blockchain. Post-Deployment Considerations Monitoring and maintenance: Regularly monitor your DApp for any issues, especially performance and security-related. Keep an eye on blockchain network updates that might affect your DApp. User feedback and updates: Be prepared to make updates based on user feedback and ongoing development in the blockchain ecosystem. Community building: Engage with your user community for valuable insights and to foster trust in your DApp. Section 6: Advanced Topics and Best Practices Advanced Node.js Features for DApps Node.js offers a range of advanced features that can enhance the functionality and performance of DApps: Stream API for efficient data handling: Utilize Node.js streams for handling large volumes of data, such as blockchain event logs, efficiently. Cluster module for scalability: Leverage the Cluster module to handle more requests and enhance the performance of your DApp. Using caching for improved performance: Implement caching strategies to reduce load times and enhance user experience. 
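As a minimal sketch of the Cluster module point above, the snippet below forks one worker per CPU core; in a real DApp backend each worker would run the Express/API layer rather than this bare HTTP server.

JavaScript
const cluster = require('node:cluster');
const http = require('node:http');
const os = require('node:os');

// cluster.isPrimary requires Node 16+ (older versions use cluster.isMaster).
if (cluster.isPrimary) {
  // Fork one worker per CPU core so the backend can serve more concurrent requests.
  for (let i = 0; i < os.cpus().length; i++) cluster.fork();

  // Replace any worker that dies to keep the DApp backend available.
  cluster.on('exit', (worker) => {
    console.log(`Worker ${worker.process.pid} exited; starting a replacement`);
    cluster.fork();
  });
} else {
  // Each worker listens on the same port; the cluster module distributes incoming connections.
  http.createServer((req, res) => {
    res.end(`Handled by worker ${process.pid}`);
  }).listen(3000);
}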
Security Best Practices Security is paramount in DApps due to their decentralized nature and value transfer capabilities: Smart contract security: Conduct thorough audits of smart contracts to prevent vulnerabilities like reentrancy attacks or overflow/underflow. Backend security: Secure your Node.js backend by implementing rate limiting, CORS (Cross-Origin Resource Sharing), and using security modules like Helmet. Frontend security measures: Ensure secure communication between the front end and the back end. Validate user input to prevent XSS (Cross-Site Scripting) and CSRF (Cross-Site Request Forgery) attacks. Performance Optimization Optimizing the performance of DApps is essential for user retention and overall success: Optimize smart contract interactions: Minimize on-chain transactions and optimize smart contract code to reduce gas costs and improve transaction times. Backend optimization: Use load balancing and optimize your database queries to handle high loads efficiently. Frontend performance: Implement lazy loading, efficient state management, and optimize resource loading to speed up your front end. Staying Updated With Web 3.0 Developments Web 3.0 is a rapidly evolving field. Stay updated with the latest developments in blockchain technology, Node.js updates, and emerging standards in the DApp space. Encouraging Community Contributions Open-source contributions can significantly improve the quality of your DApp. Encourage and facilitate community contributions to foster a collaborative development environment. Conclusion The journey into the realm of Web 3.0 and decentralized applications is not just a technological leap but a step towards a new era of the internet — one that is more secure, transparent, and user-centric. Through this article, we've explored how Node.js, a robust and versatile technology, plays a crucial role in building DApps, offering the scalability, efficiency, and rich ecosystem necessary for effective development. From understanding the basics of Web 3.0 and DApps, diving into the practicalities of using Node.js, to detailing the nuances of frontend and backend development, testing, deployment, and best practices, we have covered a comprehensive guide for anyone looking to embark on this exciting journey. As you delve into the world of decentralized applications, remember that this field is constantly evolving. Continuous learning, experimenting, and adapting to new technologies and practices are key. Engage with the community, contribute to open-source projects, and stay abreast of the latest trends in blockchain and Web 3.0. The future of the web is decentralized, and as a developer, you have the opportunity to be at the forefront of this revolution. Embrace the challenge, and use your skills and creativity to build applications that contribute to a more open, secure, and user-empowered internet.
We just published a new ScyllaDB sample application, a video streaming app. The project is available on GitHub. This blog covers the video streaming application's features and tech stack and breaks down the data modeling process.

Video Streaming App Features

The app has a minimal design with the most essential video streaming application features:

List all videos, sorted by creation date (home page)
List videos that you started watching
Watch video
Continue watching a video where you left off
Display a progress bar under each video thumbnail

Technology Stack

Programming language: TypeScript
Database: ScyllaDB
Framework: NextJS (pages router)
Component library: Material UI

Using ScyllaDB for Low-Latency Video Streaming Applications

ScyllaDB is a low-latency and high-performance NoSQL database compatible with Apache Cassandra and DynamoDB. It is well-suited to handle the large-scale data storage and retrieval requirements of video streaming applications. ScyllaDB has drivers in all the popular programming languages, and, as this sample application demonstrates, it integrates well with modern web development frameworks like NextJS.

Low latency in the context of video streaming services is crucial for delivering a seamless user experience. To lay the groundwork for high performance, you need to design a data model that fits your needs. Let's continue with an example data modeling process to see what that looks like.

Video Streaming App Data Modeling

In the ScyllaDB University Data Modeling course, we teach that NoSQL data modeling should always start with your application and queries first. Then, you work backward and create the schema based on the queries you want to run in your app. This process ensures that you create a data model that fits your queries and meets your requirements. With that in mind, let's go over the queries that our video streaming app needs to run on each page load!

Page: Continue Watching

On this page, the app lists all the videos that the user has started to watch. This view includes the video thumbnails and the progress bar under each thumbnail.

Query: Get Watch Progress

CQL
SELECT video_id, progress FROM watch_history WHERE user_id = ? LIMIT 9;

Schema: Watch History Table

CQL
CREATE TABLE watch_history (
    user_id text,
    video_id text,
    progress int,
    watched_at timestamp,
    PRIMARY KEY (user_id)
);

For this query, it makes sense to define user_id as the partition key because that is the filter we use to query the watch history table. Keep in mind that this schema might need to be updated later if there is a query that requires filtering on other columns beyond the user_id. For now, though, this schema is correct for the defined query.

Besides the progress value, the app also needs to fetch the actual metadata of each video (for example, the title and the thumbnail image). For this, the `video` table has to be queried.

Query: Get Video Metadata

CQL
SELECT * FROM video WHERE id IN ?;

Notice how we use the "IN" operator and not "=" because we need to fetch a list of videos, not just a single video.

Schema: Video Table

CQL
CREATE TABLE video (
    id text,
    content_type text,
    title text,
    url text,
    thumbnail text,
    created_at timestamp,
    duration int,
    PRIMARY KEY (id)
);

For the video table, let's define the id as the partition key because that's the only filter we use in the query.

Page: Watch Video

If users click on any of the "Watch" buttons, they will be redirected to a page with a video player where they can start and pause the video.
Query: Get Video Content

CQL
SELECT * FROM video WHERE id = ?;

This is a very similar query to the one that runs on the Continue Watching page. Thus, the same schema will work just fine for this query as well.

Schema: Video Table

CQL
CREATE TABLE video (
    id text,
    content_type text,
    title text,
    url text,
    thumbnail text,
    created_at timestamp,
    duration int,
    PRIMARY KEY (id)
);

Page: Most Recent Videos

Finally, let's break down the Most Recent Videos page, which is the home page of the application. We analyze this page last because it is the most complex one from a data modeling perspective. This page lists ten of the most recently uploaded videos that are available in the database, ordered by the video creation date. We will have to fetch these videos in two steps: first, get the timestamps, then get the actual video content.

Query: Get the Most Recent Ten Videos' Timestamps

CQL
SELECT id, top10(created_at) AS date FROM recent_videos;

You might notice that we use a custom function called top10(). This is not a standard function in ScyllaDB. It's a UDF (user-defined function) that we created to solve this data modeling problem. This function returns an array of the most recent created_at timestamps in the table. Creating a new UDF in ScyllaDB can be a great way to solve your unique data modeling challenges. These timestamp values can then be used to query the actual video content that we want to show on the page.

Query: Get Metadata for Those Videos

CQL
SELECT * FROM recent_videos WHERE created_at IN ? LIMIT 10;

Schema: Recent Videos

CQL
CREATE MATERIALIZED VIEW recent_videos_view AS
SELECT * FROM streaming.video
WHERE created_at IS NOT NULL
PRIMARY KEY (created_at, id);

In the recent videos materialized view, the created_at column is the primary key because we filter by that column in our first query to get the most recent timestamp values. Be aware that, in some cases, this can cause a hot partition.

Furthermore, the UI also shows a small progress bar under each video's thumbnail which indicates the progress the user has made watching that video. To fetch this value for each video, the app has to query the watch history table.

Query: Get Watch Progress for Each Video

CQL
SELECT progress FROM watch_history WHERE user_id = ? AND video_id = ?;

Schema: Watch History

CQL
CREATE TABLE watch_history (
    user_id text,
    video_id text,
    progress int,
    watched_at timestamp,
    PRIMARY KEY (user_id, video_id)
);

You might have noticed that the watch history table was already used in a previous query to fetch data. This time, the schema has to be modified slightly to fit this query: let's add video_id as a clustering key. This way, the query to fetch watch progress will work correctly. That's it. Now, let's see the final database schema!
Final Database Schema CQL CREATE KEYSPACE IF NOT EXISTS streaming WITH replication = { 'class': 'NetworkTopologyStrategy', 'replication_factor': '3' }; CREATE TABLE streaming.video ( id text, content_type text, title text, url text, thumbnail text, created_at timestamp, duration int, PRIMARY KEY (id) ); CREATE TABLE streaming.watch_history ( user_id text, video_id text, progress int, watched_at timestamp, PRIMARY KEY (user_id, video_id) ); CREATE TABLE streaming.recent_videos ( id text, content_type text, title text, url text, thumbnail text, created_at timestamp, duration int, PRIMARY KEY (created_at) ); User-Defined Function for the Most Recent Videos Page CQL -- Create a UDF for recent videos CREATE OR REPLACE FUNCTION state_f(acc list<timestamp>, val timestamp) CALLED ON NULL INPUT RETURNS list<timestamp> LANGUAGE lua AS $$ if val == nil then return acc end if acc == nil then acc = {} end table.insert(acc, val) table.sort(acc, function(a, b) return a > b end) if #acc > 10 then table.remove(acc, 11) end return acc $$; CREATE OR REPLACE FUNCTION reduce_f(acc1 list<timestamp>, acc2 list<timestamp>) CALLED ON NULL INPUT RETURNS list<timestamp> LANGUAGE lua AS $$ result = {} i = 1 j = 1 while #result < 10 do if acc1[i] > acc2[j] then table.insert(result, acc1[i]) i = i + 1 else table.insert(result, acc2[j]) j = j + 1 end end return result $$; CREATE OR REPLACE AGGREGATE top10(timestamp) SFUNC state_f STYPE list<timestamp> REDUCEFUNC reduce_f; This UDF uses Lua, but you could also use Wasm to create UDFs in ScyllaDB. Before creating the function, make sure to enable UDFs in the scylla.yaml configuration file (location: /etc/scylla/scylla.yaml). Clone the Repo and Get Started! To get started… Clone the repository: git clone https://github.com/scylladb/video-streaming Install the dependencies: npm install Modify the configuration file: Plain Text APP_BASE_URL="http://localhost:8000" SCYLLA_HOSTS="172.17.0.2" SCYLLA_USER="scylla" SCYLLA_PASSWD="xxxxx" SCYLLA_KEYSPACE="streaming" SCYLLA_DATACENTER="datacenter1" Migrate the database and insert sample data: npm run migrate Run the server: npm run dev Wrapping Up We hope you enjoy our video streaming app and that it helps you build low-latency and high-performance applications with ScyllaDB. If you want to keep on learning, check out ScyllaDB University, where we have free courses on data modeling, ScyllaDB drivers, and much more! If you have questions about the video streaming sample app or ScyllaDB, go to our forum, and let’s discuss! More ScyllaDB sample applications: CarePet – IoT Cloud Getting Started guide Feature Store Relevant resources: Video streaming app GitHub repository UDFs in ScyllaDB How ScyllaDB Distributed Aggregates Reduce Query Execution Time up to 20X Wasmtime: Supporting UDFs in ScyllaDB with WebAssembly ScyllaDB documentation
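For illustration only, here is a rough sketch of how the "Continue Watching" queries described above could be run from Node.js using the cassandra-driver package. This is not code from the video-streaming repository; the connection values simply mirror the configuration file shown earlier.
JavaScript
// continue-watching-example.js — illustrative sketch only, not part of the sample repo
const cassandra = require('cassandra-driver');

const client = new cassandra.Client({
  contactPoints: ['172.17.0.2'],      // SCYLLA_HOSTS
  localDataCenter: 'datacenter1',     // SCYLLA_DATACENTER
  keyspace: 'streaming',              // SCYLLA_KEYSPACE
  credentials: { username: 'scylla', password: 'xxxxx' },
});

async function getContinueWatching(userId) {
  // 1. Fetch the user's watch progress (partition key: user_id)
  const history = await client.execute(
    'SELECT video_id, progress FROM watch_history WHERE user_id = ? LIMIT 9',
    [userId],
    { prepare: true }
  );

  const videoIds = history.rows.map((row) => row.video_id);
  if (videoIds.length === 0) return [];

  // 2. Fetch the metadata for those videos with the IN operator
  const videos = await client.execute(
    'SELECT * FROM video WHERE id IN ?',
    [videoIds],
    { prepare: true }
  );

  return videos.rows;
}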
"We will soon migrate to TypeScript, and then. . . " How often do you hear this phrase? Perhaps, if you mainly work within a single project or mostly just start new projects from scratch, this is a relatively rare expression for you to hear. For me, as someone working in an outsourcing company, who, in addition to my main project, sees dozens of various other projects every month, it is a quite common phrase from the development team or a client who would like to upgrade their project stack for easier team collaboration. Spoiler alert: it is probably not going to be as soon of a transition as you think (most likely, never). While it may sound drastic, in most cases, this will indeed be the case. Most people who have not undergone such a transition may not be aware of the dozens of nuances that can arise during a project migration to TypeScript. For instance, are you prepared for the possibility that your project build, which took tens of seconds in pure JavaScript, might suddenly start taking tens of minutes when using TypeScript? Of course, it depends on your project's size, your pipeline configuration, etc., but these scenarios are not fabricated. You, as a developer, might be prepared for this inevitability, but what will your client think when you tell them that the budget for the server instance needs to be increased because the project build is now failing due to a heap out-of-memory error after adding TypeScript to the project? Yes, TypeScript, like any other tool, is not free. On the Internet, you can find a large number of articles about how leading companies successfully migrated their projects from pure JavaScript to TypeScript. While they usually describe a lot of the issues they had during the transition and how they overcame them, there are still many unspoken obstacles that people can encounter which can become critical to your migration. Despite the awareness among most teams that adding typing to their projects through migration to TypeScript might not proceed as smoothly as depicted in various articles, they still consider TypeScript as the exclusive and definitive solution to address typing issues in their projects. This mindset can result in projects remaining in pure JavaScript for extended periods, and the eagerly anticipated typing remains confined to the realm of dreams. While alternative tools for introducing typing to JavaScript code do exist, TypeScript's overwhelming popularity often casts them into the shadows. This widespread acclaim, justified by the TypeScript team's active development, may, however, prove disadvantageous to developers. Developers tend to lean towards the perception that TypeScript is the only solution to typing challenges in a project, neglecting other options. Next, we will consider JSDoc as a tool that, when used correctly and understood in conjunction with other tools (like TypeScript), can help address the typing issue in a project virtually for free. Many might think that the functionality of JSDoc pales in comparison to TypeScript, and comparing them is unfair. To some extent, that is true, but on the other hand, it depends on the perspective. Each technology has its pros and cons, counterbalancing the other. Code examples will illustrate a kind of graceful degradation from TypeScript to JavaScript while maintaining typing functionality. While for some, this might appear as a form of progressive enhancement, again, it all depends on how you look at it. 
TypeScript to JSDoc: My vanilla JavaScript enums JSDoc and Its Extensions JSDoc is a specification for the comment format in JavaScript. This specification allows developers to describe the structure of their code, data types, function parameters, and much more using special comments. These comments can then be transformed into documentation using appropriate tools. JavaScript /** * Adds two numbers. * @param {number} a - The first number. * @param {number} b - The second number. * @returns {number} The sum of the two numbers. */ const getSum = (a, b) => { return a + b } Initially, JSDoc was created with the goal of generating documentation based on comments, and this functionality remains a significant part of the tool. However, it is not the only aspect. The second substantial aspect of the tool is the description of various types within the program: variable types, object types, function parameters, and many other structures. Since the fate of ECMAScript 4 was uncertain, and many developers lacked (and still lack to this day) proper typing, JSDoc started adding this much-needed typing to JavaScript. This contributed to its popularity, and as a result, many other tools began to rely on the JSDoc syntax. An interesting fact is that while the JSDoc documentation provides a list of basic tags, the specification itself allows developers to expand the list based on their needs. Tools built on top of JSDoc leverage this flexibility to the maximum by adding their own custom tags. Therefore, encountering a pure JSDoc setup is a relatively rare occurrence. TypeScript to JSDoc: Function typing The most well-known tools that rely on JSDoc are Closure Compiler (not to be confused with the Clojure programming language) and TypeScript. Both of these tools can help make your JavaScript typed, but they approach it differently. Closure Compiler primarily focuses on enhancing your .js files by adding typing through JSDoc annotations (after all, they are just comments), while TypeScript is designed for .ts files, introducing its own well-known TypeScript constructs such as type, interface, enum, namespace, and so on. Not from its inception, but starting from version 2.3, TypeScript began allowing something similar to Closure Compiler – checking type annotations in .js files based on the use of JSDoc syntax. With this version, and with each subsequent version, TypeScript not only added support for JSDoc but also incorporated many of the core tags and constructs present in the Closure Compiler. This made migration to TypeScript more straightforward. While Closure Compiler is still being updated, used by some teams, and remains the most effective tool for code compression in JavaScript (if its rules are followed), due to support for checking .js files and various other updates brought by the TypeScript team, Closure Compiler eventually lost to TypeScript. From the implementation perspective, incorporating an understanding of JSDoc notation into TypeScript is not a fundamental change. Whether it is TypeScript types or JSDoc types, ultimately, they both become part of the AST (Abstract Syntax Tree) of the executed program. This is convenient for us as developers because all our everyday tools, such as ESLint (including all its plugins), Prettier, and others primarily rely on the AST. Therefore, regardless of the file extensions we use, our favorite plugins can continue to work in both .js and .ts files (with some exceptions, of course).
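To make the comparison concrete, here is a small illustrative sketch (not one of the article's screenshots) of how a typed function and an object shape can be expressed purely with JSDoc in a .js file, using tags that both TypeScript and Closure Compiler understand:
JavaScript
// user.js — illustrative sketch of JSDoc-based typing in a plain .js file

/**
 * @typedef {object} User
 * @property {string} name
 * @property {number} age
 */

/**
 * Formats a greeting for the given user.
 * @param {User} user - The user to greet.
 * @param {string} [salutation] - Optional salutation, defaults to "Hello".
 * @returns {string} The formatted greeting.
 */
const greet = (user, salutation = 'Hello') => {
  return `${salutation}, ${user.name} (${user.age})`
}

// With checkJs enabled, TypeScript flags this call: 'age' must be a number.
// greet({ name: 'Ada', age: 'not-a-number' })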
TypeScript to JSDoc: Type declaration Developer Experience When adding typing to JavaScript code using JSDoc, it is advisable to use additional tools that enhance the development experience. eslint-plugin-jsdoc is a JSDoc plugin for ESLint. This plugin reports errors in case of invalid JSDoc syntax usage and helps standardize the written JSDoc. An important setting for this plugin is the mode option, which offers one of the following values: typescript, closure (referring to Closure Compiler), or jsdoc. As mentioned earlier, JSDoc can vary, and this option allows you to specify which JSDoc tags and syntax to use. The default value is typescript (though this has not always been the case), which, given TypeScript's dominance over other tools that work with JSDoc, seems like a sensible choice. TypeScript to JSDoc: Type casting It is also important to choose a tool for analyzing the type annotations written in JSDoc, and in this case, it will be TypeScript. This might sound strange because, in this article, it seems like we are discussing its replacement. However, we are not using TypeScript for its primary purpose – our files still have the .js extension. We will only use TypeScript as a type-checking linter. In most projects where TypeScript is used fully, there is typically a build script responsible for compiling .ts files into .js. In the case of using TypeScript as a linting tool, instead of a build command handling compilation, you will have a command for linting your types. JavaScript // package.json { "scripts": { "lint:type": "tsc --noEmit" } } If, in the future, a tool emerges that surpasses TypeScript as a linting tool for project typing, we can always replace it in this script. To make this script work correctly, you need to create a tsconfig.json file in your project or add additional parameters to this script. These parameters include allowJs and checkJs, which allow TypeScript to check code written in .js files. In addition to these parameters, you can enable many others. For example, to make type checking stricter, you can use strict, noUncheckedIndexedAccess, exactOptionalPropertyTypes, noPropertyAccessFromIndexSignature, and more. TypeScript will rigorously check your code even if you are using .js files. The TypeScript team consistently enhances the functionality of TypeScript when working with JSDoc. With almost every release, they introduce both fixes and new features. The same applies to code editors. Syntax highlighting and other DX features provided by TypeScript when working with .ts files also work when dealing with .js files using JSDoc. Although there are occasional instances where support for certain JSDoc features may come later, many GitHub issues labeled with JSDoc in the TypeScript backlog indicate that the TypeScript team continues to work on improving JSDoc support. TypeScript to JSDoc: Generics Many might mention the nuance that when using TypeScript solely for .js files, you are deprived of the ability to use additional constructs provided by TypeScript; for example, Enums, Namespaces, Class Parameter Properties, Abstract Classes and Members, Experimental (!) Decorators, and others, as their syntax is only available in files with the .ts extension. Again, for some, this may seem like a drawback, but for others, it could be considered a benefit, as most of these constructs have their drawbacks. Primarily, during TypeScript compilation to JavaScript, anything related to types simply disappears.
In the case of using the aforementioned constructs, all of them are translated into less-than-optimal JavaScript code. If this does not sound compelling enough for you to refrain from using them, you can explore the downsides of each of these constructs on your own, as there are plenty of articles on the Internet discussing these issues. Overall, the use of these constructs is generally considered an anti-pattern. On most of my projects where I use TypeScript to its full extent (with all my code residing in .ts files), I always use a custom ESLint rule: JavaScript // eslint.config.js /** @type {import('eslint').Linter.FlatConfig} */ const config = { rules: { 'no-restricted-syntax': [ 'error', { selector: 'TSEnumDeclaration,TSModuleDeclaration,TSParameterProperty,ClassDeclaration[abstract=true],Decorator', message: 'TypeScript shit is forbidden.', }, ], }, } This rule prohibits the use of TypeScript constructs that raise concerns. When considering what remains of TypeScript when applying this ESLint rule, essentially, only the typing aspect remains. In this context, when using this rule, leveraging JSDoc tags and syntax provided by TypeScript for adding typing to .js files is almost indistinguishable from using TypeScript with .ts files. TypeScript to JSDoc: Class and its members As mentioned earlier, most tools rely on AST for their operations, including TypeScript. TypeScript does not care whether you define types using TypeScript's keywords and syntax or JSDoc tags supported by TypeScript. This principle also applies to ESLint and its plugins, including the typescript-eslint plugin. This means that we can use this plugin and its powerful rules to check typing even if the entire code is written in .js files (provided you enabled the appropriate parser). Unfortunately, a significant drawback when using these tools with .js files is that some parts of these tools, such as specific rules in typescript-eslint, rely on the use of specific TypeScript keywords. Examples of such rules include explicit-function-return-type, explicit-member-accessibility, no-unsafe-return, and others that are tied explicitly to TypeScript keywords. Fortunately, there are not many such rules. Despite the fact that these rules could be rewritten to use AST, the development teams behind these rules are currently reluctant to do so due to the increased complexity of support when transitioning from using keywords to AST. Conclusion JSDoc, when used alongside TypeScript as a linting tool, provides developers with a powerful means of typing .js files. Its functionality does not lag significantly behind TypeScript when used to its full potential, keeping all the code in .ts files. Utilizing JSDoc allows developers to introduce typing into a pure JavaScript project right now, without delaying it as a distant dream of a full migration to TypeScript (which most likely will never happen). Many mistakenly spend too much time critiquing the JSDoc syntax, deeming it ugly, especially when compared to TypeScript. It is hard to argue otherwise, TypeScript's syntax does indeed look much more concise. However, what is truly a mistake is engaging in empty discussions about syntax instead of taking any action. In the end, you will probably want to achieve a similar result, as shown in the screenshot below. Performing such a migration is significantly easier and more feasible when transitioning from code that already has typing written in JSDoc. 
JSDoc to TypeScript: Possibly a long-awaited migration; React Component By the way, many who label the JSDoc syntax as ugly, while using TypeScript as their sole primary tool, after such remarks, nonchalantly return to their .ts files, fully embracing TS Enums, TS Parameter Properties, TS Experimental (!) Decorators, and other TS constructs that might raise questions. Do they truly believe they are on the right side? Most of the screenshots were taken from the migration of .ts files to .js while preserving type functionality in my library form-payload (here is the PR). Why did I decide to make this migration? Because I wanted to. Although this is far from my only experience with such migrations. Interestingly, the sides of migrations often change (migrations from .js to .ts undoubtedly occur more frequently). Despite my affection for TypeScript and its concise syntax, after several dozen files written/rewritten using JSDoc, I stopped feeling any particular aversion to the JSDoc syntax, as it is just syntax. Summing Up JSDoc provides developers with real opportunities for gradually improving the codebase without requiring a complete transition to TypeScript from the start of migration. It is essential to remember that the key is not to pray to the TypeScript-god but to start taking action. The ultimate transition to using TypeScript fully is possible, but you might also realize that JSDoc is more than sufficient for successful development, as it has its advantages. For example, here is what a "JSDoc-compiler" might look like: JavaScript // bundler.js await esbuild.build({ entryPoints: [jsMainEntryPoint], minify: true, // ✓ }) Give it a try! Do not stand still, continually develop your project, and I am sure you will find many other benefits!
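One practical footnote to the setup described in this article: a minimal tsconfig.json for the "TypeScript as a type-checking linter" approach might look like the following sketch. The include path is an assumption and will differ per project; the compiler options are the ones mentioned earlier.
JSON
{
  "compilerOptions": {
    "allowJs": true,
    "checkJs": true,
    "noEmit": true,
    "strict": true,
    "noUncheckedIndexedAccess": true,
    "exactOptionalPropertyTypes": true,
    "noPropertyAccessFromIndexSignature": true
  },
  "include": ["src/**/*.js"]
}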
Welcome back to this series where we’re building web applications with AI tooling. Intro and Setup Your First AI Prompt Streaming Responses How Does AI Work Prompt Engineering AI-Generated Images Security and Reliability Deploying In the previous post, we got AI-generated jokes into our Qwik application from OpenAI API. It worked, but the user experience suffered because we had to wait until the API completed the entire response before updating the client. A better experience, as you’ll know if you’ve used any AI chat tools, is to respond as soon as each bit of text is generated. It becomes a sort of teletype effect. That’s what we’re going to build today using HTTP streams. Prerequisites Before we get into streams, we need to explore something with a Qwik quirk related to HTTP requests. If we examine the current POST request being sent by the form, we can see that the returned payload isn’t just the plain text we returned from our action handler. Instead, it’s this sort of serialized data. This is the result of how the Qwik Optimizer lazy loads assets, and is necessary to properly handle the data as it comes back. Unfortunately, this prevents standard streaming responses. So while routeAction$ and the Form component are super handy, we’ll have to do something else. To their credit, the Qwik team does provide a well-documented approach for streaming responses. However, it involves their server$ function and async generator functions. This would probably be the right approach if we’re talking strictly about Qwik, but this series is for everyone. I’ll avoid this implementation, as it’s too specific to Qwik, and focus on broadly applicable concepts instead. Refactor Server Logic It sucks that we can’t use route actions because they’re great. So what can we use? Qwik City offers a few options. The best I found is middleware. They provide enough access to primitive tools that we can accomplish what we need, and the concepts will apply to other contexts besides Qwik. Middleware is essentially a set of functions that we can inject at various points within the request lifecycle of our route handler. We can define them by exporting named constants for the hooks we want to target (onRequest, onGet, onPost, onPut, onDelete). So instead of relying on a route action, we can use a middleware that hooks into any POST request by exporting an onPost middleware. In order to support streaming, we’ll want to return a standard Response object. We can do so by creating a Response object and passing it to the requestEvent.send() method. Here’s a basic (non-streaming) example: /** @type {import('@builder.io/qwik-city').RequestHandler} */ export const onPost = (requestEvent) => { requestEvent.send(new Response('Hello Squirrel!')) } Before we tackle streaming, let’s get the same functionality from the old route action implemented with middleware. We can copy most of the code into the onPost middleware, but we won’t have access to formData. Fortunately, we can recreate that data from the requestEvent.parseBody() method. We’ll also want to use requestEvent.send() to respond with the OpenAI data instead of a return statement. 
/** @type {import('@builder.io/qwik-city').RequestHandler} */ export const onPost = async (requestEvent) => { const OPENAI_API_KEY = requestEvent.env.get('OPENAI_API_KEY') const formData = await requestEvent.parseBody() const prompt = formData.prompt const body = { model: 'gpt-3.5-turbo', messages: [{ role: 'user', content: prompt }] } const response = await fetch('https://api.openai.com/v1/chat/completions', { // ... fetch options }) const data = await response.json() const responseBody = data.choices[0].message.content requestEvent.send(new Response(responseBody)) } Refactor Client Logic Replacing the route actions has the unfortunate side effect of meaning we also can’t use the <Form> component anymore. We’ll have to use a regular HTML <form> element and recreate all the benefits we had before, including sending HTTP requests with JavaScript, tracking the loading state, and accessing the results. Let’s refactor our client side to support those features again. We can break these requirements down to two things: a JavaScript solution for submitting forms and a reactive state for managing loading states and results. I’ve covered submitting HTML forms with JavaScript in depth several times in the past: Make Beautifully Resilient Apps With Progressive Enhancement File Uploads for the Web (2): Upload Files With JavaScript Building Super Powered HTML Forms with JavaScript So today I’ll just share the snippet, which I put inside a utils.js file in the root of my project. This jsFormSubmit function accepts an HTMLFormElement, then constructs a fetch request based on the form attributes and returns the resulting Promise: /** * @param {HTMLFormElement} form */ export function jsFormSubmit(form) { const url = new URL(form.action) const formData = new FormData(form) const searchParameters = new URLSearchParams(formData) /** @type {Parameters<typeof fetch>[1]} */ const fetchOptions = { method: form.method } if (form.method.toLowerCase() === 'post') { fetchOptions.body = form.enctype === 'multipart/form-data' ? formData : searchParameters } else { url.search = searchParameters } return fetch(url, fetchOptions) } This generic function can be used to submit any HTML form, so it’s handy to use in a submit event handler. Sweet! As for the reactive data, Qwik provides two options, useStore and useSignal. I prefer useStore, which allows us to create an object whose properties are reactive - meaning changes to the object’s properties will automatically be reflected wherever they are referenced in the UI. We can use useStore to create a “state” object in our component to track the loading state of the HTTP request as well as the text response. import { $, component$, useStore } from "@builder.io/qwik"; // other setup logic export default component$(() => { const state = useStore({ isLoading: false, text: '', }) // other component logic }) Next, we can update the template. Since we can no longer use the action object we had before, we can replace references from action.isRunning and action.value to state.isLoading and state.text, respectively (don’t ask me why I changed the property names). I’ll also add a “submit” event handler to the form called handleSubmit, which we’ll look at shortly. <main> <form method="post" preventdefault:submit onSubmit$={handleSubmit} > <div> <label for="prompt">Prompt</label> <textarea name="prompt" id="prompt"> Tell me a joke </textarea> </div> <button type="submit" aria-disabled={state.isLoading}> {state.isLoading ? 'One sec...'
: 'Tell me'} </button> </form> {state.text && ( <article> <p>{state.text}</p> </article> )} </main> Note that the <form> does not explicitly provide an action attribute. By default, an HTML form will submit data to the current URL, so we only need to set the method to POST and submit this form to trigger the onPost middleware we defined earlier. Now, the last step to get this refactor working is defining handleSubmit. Just like we did in the previous post, we need to wrap an event handler inside Qwik’s $ function. Inside the event handler, we’ll want to clear out any previous data from state.text, set state.isLoading to true, then pass the form’s DOM node to our fancy jsFormSubmit function. This should submit the HTTP request for us. Once it comes back, we can update state.text with the response body, and return state.isLoading to false. const handleSubmit = $(async (event) => { state.text = '' state.isLoading = true /** @type {HTMLFormElement} */ const form = event.target const response = await jsFormSubmit(form) state.text = await response.text() state.isLoading = false }) OK! We should now have a client-side form that uses JavaScript to submit an HTTP request to the server while tracking the loading and response states, and updating the UI accordingly. That was a lot of work to get the same solution we had before but with fewer features. But the key benefit is we now have direct access to the platform primitives we need to support streaming. Enable Streaming on the Server Before we start streaming responses from OpenAI, I think it’s helpful to start with a very basic example to get a better grasp of streams. Streams allow us to send small chunks of data over time. So as an example, let’s print out some iconic David Bowie lyrics in tempo with the song, “Space Oddity.” When we construct our Response object, instead of passing plain text, we’ll want to pass a stream. We’ll create the stream shortly, but here’s the idea: /** @type {import('@builder.io/qwik-city').RequestHandler} */ export const onPost = (requestEvent) => { requestEvent.send(new Response(stream)) } We’ll create a very rudimentary ReadableStream using the ReadableStream constructor and pass it an optional parameter. This optional parameter can be an object with a start method that’s called when the stream is constructed. The start method is responsible for the stream’s logic and has access to the stream controller, which is used to send data and close the stream. const stream = new ReadableStream({ start(controller) { // Stream logic goes here } }) OK, let’s plan out that logic. We’ll have an array of song lyrics and a function to "sing" them (pass them to the stream). The sing function will take the first item in the array and pass that to the stream using the controller.enqueue() method. If it’s the last lyric in the list, we can close the stream with controller.close(). Otherwise, the sing method can call itself again after a short pause. const stream = new ReadableStream({ start(controller) { const lyrics = ['Ground', ' control', ' to major', ' Tom.'] function sing() { const lyric = lyrics.shift() controller.enqueue(lyric) if (lyrics.length < 1) { controller.close() } else { setTimeout(sing, 1000) } } sing() } }) So each second, for four seconds, this stream will send out the lyrics “Ground control to major Tom.” Slick! Because this stream will be used in the body of the Response, the connection will remain open for four seconds until the response completes.
But the front end will have access to each chunk of data as it arrives, rather than waiting the full four seconds. This doesn’t speed up the total response time (in some cases, streams can increase response times), but it does allow for a faster-perceived response, and that makes a better user experience. Here’s what my code looks like: /** @type {import('@builder.io/qwik-city').RequestHandler} */ export const onPost = async (requestEvent) => { const stream = new ReadableStream({ start(controller) { const lyrics = ['Ground', ' control', ' to major', ' Tom.'] function sing() { const lyric = lyrics.shift() controller.enqueue(lyric) if (lyrics.length < 1) { controller.close() } else { setTimeout(sing, 1000) } } sing() } }) requestEvent.send(new Response(stream)) } Unfortunately, as it stands right now, the client will still be waiting four seconds before seeing the entire response, and that’s because we weren’t expecting a streamed response. Let’s fix that. Enable Streaming on the Client Even when dealing with streams, the default browser behavior when receiving a response is to wait for it to complete. In order to get the behavior we want, we’ll need to use client-side JavaScript to make the request and process the streaming body of the response. We’ve already tackled that first part inside our handleSubmit function. Let’s start processing that response body. We can access a reader for the response body’s ReadableStream using its getReader() method. This reader has its own read() method that we can use to access the next chunk of data, as well as information about whether the response is done streaming or not. The only "gotcha" is that the data in each chunk doesn’t come in as text: it comes in as a Uint8Array, which is “an array of 8-bit unsigned integers.” It’s basically the representation of the binary data, and you don’t really need to understand it any deeper than that unless you want to sound very smart at a party (or boring). The important thing to understand is that on their own, these data chunks aren’t very useful. To get something we can use, we’ll need to decode each chunk of data using a TextDecoder. Ok, that’s a lot of theory. Let’s break down the logic and then look at some code. When we get the response back, we need to: Grab the reader from the response body using response.body.getReader(). Set up a decoder using TextDecoder and a variable to track the streaming status. Process each chunk until the stream is complete, with a while loop that does this: Grab the next chunk’s data and stream status. Decode the data and use it to update our app’s state.text. Update the streaming status variable, terminating the loop when complete. Update the loading state of the app by setting state.isLoading to false. The new handleSubmit function should look something like this: const handleSubmit = $(async (event) => { state.text = '' state.isLoading = true /** @type {HTMLFormElement} */ const form = event.target const response = await jsFormSubmit(form) // Parse streaming body const reader = response.body.getReader() const decoder = new TextDecoder() let isStillStreaming = true while(isStillStreaming) { const {value, done} = await reader.read() const chunkValue = decoder.decode(value) state.text += chunkValue isStillStreaming = !done } state.isLoading = false }) Now, when I submit the form, I see something like: “Groundcontrolto majorTom.” Hell yeah!!! OK, most of the work is done. Now we just need to replace our demo stream with the OpenAI response.
Stream OpenAI Response Looking back at our original implementation, the first thing we need to do is modify the request to OpenAI to let them know that we would like a streaming response. We can do that by setting the stream property in the fetch payload to true. const body = { model: 'gpt-3.5-turbo', messages: [{ role: 'user', content: prompt }], stream: true } const response = await fetch('https://api.openai.com/v1/chat/completions', { method: 'post', headers: { 'Content-Type': 'application/json', Authorization: `Bearer ${OPENAI_API_KEY}`, }, body: JSON.stringify(body) }) UPDATE 11/15/2023: I used fetch and custom streams because at the time of writing, the openai module on NPM did not properly support streaming responses. That issue has been fixed, and I think a better solution would be to use that module and pipe their data through a TransformStream to send to the client. That version is not reflected here. Next, we could pipe the response from OpenAI directly to the client, but we might not want to do that. The data they send doesn’t really align with what we want to send to the client because it looks like this (two chunks, one with data, and one representing the end of the stream): data: {"id":"chatcmpl-4bJZRnslkje3289REHFEH9ej2","object":"chat.completion.chunk","created":1690319476,"model":"gpt-3.5-turbo-0613","choices":[{"index":0,"delta":{"content":"Because"},"finish_reason":"stop"}]} data: [DONE] Instead, what we’ll do is create our own stream, similar to the David Bowie lyrics, that will do some setup, enqueue chunks of data into the stream, and close the stream. Let’s start with an outline: const stream = new ReadableStream({ async start(controller) { // Any setup before streaming // Send chunks of data // Close stream } }) Since we’re dealing with a streaming fetch response from OpenAI, a lot of the work we need to do here can actually be copied from the client-side stream handling. This part should look familiar: const reader = response.body.getReader() const decoder = new TextDecoder() let isStillStreaming = true while(isStillStreaming) { const {value, done} = await reader.read() const chunkValue = decoder.decode(value) // Here's where things will be different isStillStreaming = !done } This snippet was taken almost directly from the frontend stream processing example. The only difference is that we need to treat the data coming from OpenAI slightly differently. As we saw, the chunks of data they send will look something like "data: [JSON data or done]". Another gotcha is that every once in a while, they’ll actually slip in TWO of these data strings in a single streaming chunk. So here’s what I came up with for processing the data. Create a Regular Expression to grab the rest of the string after "data:". For the unlikely event that there is more than one data string, use a while loop to process every match in the string. If the current match matches the closing condition (“[DONE]”), close the stream. Otherwise, parse the data as JSON and enqueue the first piece of text from the list of choices (json.choices[0].delta.content). Fall back to an empty string if none is present. Lastly, in order to move to the next match, if there is one, we can use RegExp.exec().
The logic is quite abstract without looking at the code, so here’s what the whole stream looks like now: const stream = new ReadableStream({ async start(controller) { // Do work before streaming const reader = response.body.getReader() const decoder = new TextDecoder() let isStillStreaming = true while(isStillStreaming) { const {value, done} = await reader.read() const chunkValue = decoder.decode(value) /** * Captures any string after the text `data: ` * @see https://regex101.com/r/R4QgmZ/1 */ const regex = /data:\s*(.*)/g let match = regex.exec(chunkValue) while (match !== null) { const payload = match[1] // Close stream if (payload === '[DONE]') { controller.close() break } else { try { const json = JSON.parse(payload) const text = json.choices[0].delta.content || '' // Send chunk of data controller.enqueue(text) match = regex.exec(chunkValue) } catch (error) { const nextChunk = await reader.read() const nextChunkValue = decoder.decode(nextChunk.value) match = regex.exec(chunkValue + nextChunkValue) } } } isStillStreaming = !done } } }) UPDATE 11/15/2023: I discovered that OpenAI API sometimes returns the JSON payload across two streams. So the solution is to use a try/catch block around the JSON.parse and in the case that it fails, reassign the match variable to the current chunk value plus the next chunk value. The code above has the updated snippet. Review That should be everything we need to get streaming working. Hopefully, it all makes sense and you got it working on your end. I think it’s a good idea to review the flow to make sure we’ve got it: The user submits the form, which gets intercepted and sent with JavaScript. This is necessary to process the stream when it returns. The request is received by the action handler which forwards the data to the OpenAI API along with the setting to return the response as a stream. The OpenAI response will be sent back as a stream of chunks, some of which contain JSON and the last one being “[DONE]“. Instead of passing the stream to the action response, we create a new stream to use in the response. Inside this stream, we process each chunk of data from the OpenAI response and convert it to something more useful before enqueuing it for the action response stream. When the OpenAI stream closes, we also close our action stream. The JavaScript handler on the client side will also process each chunk of data as it comes in and update the UI accordingly. Conclusion The app is working. It’s pretty cool. We covered a lot of interesting things today. Streams are very powerful, but also challenging and, especially when working within Qwik, there are a couple of little gotchas. However, because we focused on low-level fundamentals, these concepts should apply across any framework. As long as you have access to the platform and primitives like streams, requests, and response objects then this should work. That’s the beauty of fundamentals. I think we got a pretty decent application going now. The only problem is right now we’re using a generic text input and asking users to fill in the entire prompt themselves. In fact, they can put in whatever they want. We’ll want to fix that in a future post, but the next post is going to step away from code and focus on understanding how the AI tools actually work. I hope you’ve been enjoying this series and come back for the rest of it. Thank you so much for reading.
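Following up on the update mentioned above about the openai module: here is a rough sketch of that alternative, assuming the openai v4 Node SDK and a runtime with web streams available. It wraps the SDK's async iterator in a ReadableStream rather than piping through a TransformStream, just to keep the sketch short, and it is not the implementation used in this series.
JavaScript
// Alternative sketch (not the version used in this series): stream completions via the openai npm module
import OpenAI from 'openai'

/** @type {import('@builder.io/qwik-city').RequestHandler} */
export const onPost = async (requestEvent) => {
  const openai = new OpenAI({ apiKey: requestEvent.env.get('OPENAI_API_KEY') })
  const formData = await requestEvent.parseBody()

  // The SDK returns an async iterable of parsed chunks, so there is no need
  // to split "data:" lines by hand.
  const completion = await openai.chat.completions.create({
    model: 'gpt-3.5-turbo',
    messages: [{ role: 'user', content: formData.prompt }],
    stream: true,
  })

  const encoder = new TextEncoder()
  const stream = new ReadableStream({
    async start(controller) {
      for await (const chunk of completion) {
        // Each chunk may or may not carry a piece of text in the delta
        controller.enqueue(encoder.encode(chunk.choices[0]?.delta?.content ?? ''))
      }
      controller.close()
    },
  })

  requestEvent.send(new Response(stream))
}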
So I've been working on a project for a while to create a real-time, high-performance JavaScript Chart Library. This project uses quite an ambitious & novel tech stack including a large legacy codebase in C/C++ which is compiled to WebAssembly using Emscripten, targeting WebGL, and a TypeScript API wrapper allowing you to load the charts in JS without having to worry about the underlying Wasm. First Up, Why Use Wasm at All? WebAssembly is an exciting technology and offers performance benefits over JavaScript in many cases. Also, in this case, a legacy C++ codebase already handled much of the rendering for charts & graphs in OpenGL and needed only a little work to be able to target WebGL. It's fairly easy to compile existing C++ code into WebAssembly using Emscripten, and all that remains is writing bindings to generate typings and then building your JavaScript API around the Wasm library to use it. During the development of the library we learned some interesting things about the WebAssembly memory model and how to avoid and debug memory leaks, which I'll share below. JavaScript vs. WebAssembly Memory Model WebAssembly has a completely different memory model to JavaScript. While JavaScript has a garbage collector, which automatically cleans up the memory of variables that are no longer required, WebAssembly simply does not. An object or buffer declared in Wasm memory must be deleted by the caller; if not, a memory leak will occur. How Memory Leaks Are Caused in JavaScript Memory leaks can occur in both JavaScript and WebAssembly, and care and attention must be taken by the developer to ensure that memory is correctly cleaned up when using WebAssembly. Despite JavaScript being a garbage-collected, managed programming language, it’s still extremely easy to create a memory leak just in vanilla JavaScript. Here are a couple of ways it is possible to inadvertently leak memory in a JavaScript app: Arrow functions and closures can capture variables and keep them alive, so they cannot be collected by the JavaScript garbage collector. Callbacks or event listeners can capture a variable and keep it alive. Global variables or static variables stay alive for the lifetime of the application. Simply forgetting to use let or const can convert a variable to a global variable. Even detached DOM nodes can keep objects alive in JavaScript. Simply removing a node from the DOM but keeping a reference to it can prevent the node and its children from being collected. How Memory Leaks Are Caused in WebAssembly Wasm has a separate heap from the JavaScript virtual machine. This memory is allocated in the browser, and reserved from the host OS. When you allocate memory in Wasm, the Wasm heap is grown, and a range of addresses are reserved. When you delete memory in Wasm, the heap does not shrink and memory is not returned to the host OS. Instead, the memory is simply marked as deleted or available. This means it can be re-used by future allocations. To cause a memory leak in WebAssembly you simply need to allocate memory and forget to delete it. Since there is no automatic garbage collection, finalization, or marking of memory as no longer needed, it must come from the user. All WebAssembly types exported by the Emscripten compiler have a .delete() function on objects that use Wasm memory. This needs to be called when the object is no longer required.
Here's a quick example: Example: Leaking Memory in Wasm Assuming you have a type declared and bound in C++ like this: C++ // person.cpp #include <string> #include <emscripten/bind.h> class Person { public: // C++ Constructor Person(std::string name, int age) : name(name), age(age) {} // C++ Destructor ~Person() {} std::string getName() { return name; } int getAge() { return age; } private: std::string name; int age; }; EMSCRIPTEN_BINDINGS(person_module) { emscripten::class_<Person>("Person") .constructor<std::string, int>() .function("getName", &Person::getName) .function("getAge", &Person::getAge); } And compile the type with Emscripten's embind support like this: Shell emcc person.cpp -o person.js --bind -s MODULARIZE=1 You can now instantiate, use, and delete the type in JavaScript like this: JavaScript const createModule = require('./person.js'); // Include the generated JavaScript glue (a factory function when MODULARIZE=1) createModule().then((Module) => { // Instantiate a Person object const person = new Module.Person('John Doe', 30); console.log('Person object created:', person); // Access and print properties console.log('Name:', person.getName()); console.log('Age:', person.getAge()); // Delete the Person object (calls the C++ destructor) person.delete(); }); Forgetting to call .delete(), however, causes a Wasm memory leak. The memory in the browser will grow and not shrink. Detecting Memory Leaks in WebAssembly Applications Because a memory leak is catastrophic to an application, we had to ensure that our code did not leak memory, but also that the user code (those consuming and using our JavaScript Chart Library, https://www.scichart.com/javascript-chart-features, in their applications) did not leak memory. To solve this, we developed our own in-house memory debugging tools. This is implemented as an object registry, which is a Map<string, TObjectEntryInfo> of all undeleted and uncollected objects, where TObjectEntryInfo is a type which stores a WeakRef to the object. Using a JavaScript proxy technique, we were able to intercept calls to new/delete on all WebAssembly types. Each time an object was instantiated, we added it to the objectRegistry, and each time it was deleted, we removed it from the objectRegistry. Now you can run your application, enable the memory debugging tools, and output specific snapshots of your application state. Here's an example of how to use the tool and its output. First, enable the MemoryUsageHelper (memory debugging tools): JavaScript import { MemoryUsageHelper } from "scichart"; MemoryUsageHelper.isMemoryUsageDebugEnabled = true; This automatically tracks all the types in our library, but you can track any arbitrary object in your application by calling register and unregister like this: JavaScript // Register an arbitrary object MemoryUsageHelper.register(yourObject, "identifier"); // Unregister an arbitrary object MemoryUsageHelper.unregister("identifier"); Later, at a specific point, output a snapshot by calling this function: JavaScript MemoryUsageHelper.objectRegistry.log(); This outputs to the console all the objects which have not been deleted or not yet collected.
Objects in deletedNotCollected and collectedNotDeleted identify possible leaks where an object was collected by the JavaScript garbage collector but not deleted (and vice versa). The MemoryUsageHelper Wasm memory leak debugging tool is part of SciChart.js, available on npm with a free community license. It can be used in WebAssembly or JavaScript applications to track memory usage.
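As a general pattern (a sketch only, not part of the SciChart API), one way to reduce the chance of forgetting .delete() is to scope Wasm-backed objects with a small helper, and optionally use a FinalizationRegistry to warn about objects that were garbage collected without ever being deleted. The wasmModule.Person name below is a placeholder for any Emscripten-exported type.
JavaScript
// wasm-lifetime.js — illustrative sketch

// Warns about objects collected by the GC that never had .delete() called on them.
const leakRegistry = new FinalizationRegistry((label) => {
  console.warn(`Possible Wasm leak: ${label} was garbage collected but never deleted`)
})

/** Track a Wasm-backed object so we get a warning if it is collected before deletion. */
export function track(wasmObject, label) {
  leakRegistry.register(wasmObject, label, wasmObject)
  return wasmObject
}

/** Create a Wasm-backed object, hand it to a callback, and always delete it afterwards. */
export async function withWasmObject(create, use) {
  const instance = create()
  try {
    return await use(instance)
  } finally {
    instance.delete()                  // marks the Wasm-side memory as available again
    leakRegistry.unregister(instance)  // deleted properly, so no warning is needed
  }
}

// Usage (hypothetical Emscripten module):
// await withWasmObject(() => new wasmModule.Person('John Doe', 30), (p) => p.getName())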
Welcome to the exciting world of React Redux, a game-changing JavaScript library designed to manage application state efficiently. Familiarity and proficiency with React Redux have become essential for many contemporary web developers, given its integral role in creating robust, performant applications. This article unravels the mechanisms and principles of React Redux, exploring its origins and its crucial role in enhancing JavaScript applications. The discussions extend from introducing the fundamentals to disbursing the intricacies of the Redux Store, Actions, Reducers, and Middlewares. Embark on this informative expedition to comprehend how React Redux serves as an invaluable toolset for building dynamic, user-interactive interfaces. Fundamentals of React Redux Understanding the Power of React Redux in Today’s Tech Landscape The pace of technology evolution is breathtaking, with new frameworks and libraries launching every day that completely transform the developer landscape. One such technology, a combination of two open-source JavaScript libraries known as React Redux, has unequivocally become the bellwether in state management solutions for modern web applications. React was initially released by Facebook in 2013 and provides a base framework for developers to build complex and interactive user interfaces. Although powerful in terms of interface development, it doesn’t include any built-in architecture to handle the application state. Enter Redux, offering the missing piece in the puzzle and significantly enhancing React’s capabilities by managing application state at scale and seamlessly integrating with it. Redux was inspired by Facebook’s Flux and functional programming language Elm, created to manage state in a more predictable manner. State refers to persisting data that dictates the behavior of an app at any given point. Redux stores the entire app’s state in an immutable tree, which makes it much easier to manage, track, and manipulate in large applications. Redux ensures simplicity, predictability, and consistency in working with data. The libraries adopt unidirectional data flow, meaning the data maintains a one-way stream, reducing the complexity of tracking changes in large-scale apps and making debugging a less daunting task. However, it’s crucial to note that Redux isn’t for every project. Its value comes to the fore when dealing with considerable state management, avoiding unneeded complexity in smaller applications. React Redux combines the robust interface development of React and the state management prowess of Redux, simplifying the process of building complex apps. Their union allows the use of functional programming inside a JavaScript app, where React handles the view, and Redux manages the data. Get best out of React Redux through its ecosystem and libraries such as Redux Toolkit and Redux Saga. The Redux Toolkit simplifies Redux usage with utilities to reduce boilerplate code, and Redux Saga manages side effects in a better and readable manner. The secret to why React Redux thrives in the tech world lies in its maintainability, scalability, and developer experience. Centralized and predictable state management opens the door to powerful developer tools, async logic handling, breaking down UI into easily testable parts, and caching of data. These features have attracted a vast community of developers and organizations, nurturing its growth and development. 
React Redux sharpens the edge of tech developments through quick prototyping, enhanced performance, and easing the load of dealing with complex state manipulations. In a dynamic tech environment, it shines as a reliable, scalable, and efficient choice for developers worldwide. As technological advancements show no sign of slowing, understanding tools like React Redux becomes critical, and harnessing its potential will maintain a productive and efficient development flow. To any tech enthusiast devoted to solutions that automate and maximize productivity, this should sound like music to the ears! Unquestionably, React Redux plays an essential role in understanding how today’s technology ecosystems interact and function. Understanding Redux Store Branching out from the comprehensive understanding of React and Redux, let’s delve into the specifics of Redux Store and its pivotal role in web application development. It’s not an overstatement to say that Redux Store is the beating heart of every Redux application. It houses the entire state of the application, and understanding how to manage it is paramount to mastering Redux. Redux Store is effectively the state container; it’s where the state of your application stays, and all changes flow through. No doubt, this centralized store holds immense importance, but there’s something more compelling about it – Redux Store is read-only. Yes, you read it right! The state cannot be directly mutated. This strict read-only pattern ensures predictability by imposing a straightforward data flow and making state updates traceable and easy to comprehend. One might wonder, if not by direct mutation, how does the state update happen in a Redux Store? This is where the power of actions and reducers steps in. The only method to trigger state changes is to dispatch an action – an object describing what happened. To specify how the state tree transforms in light of these actions, reducers are designated. Reducers are pure functions that compute the new state based on the previous state and the action dispatched. Redux Store leverages three fundamental functions: dispatch(), getState(), and subscribe(). Dispatch() method dispenses actions to the store. getState() retrieves the current state of the Redux Store. Subscribe() registers a callback function that the Redux Store will call any time an action has been dispatched to ensure updates in UI components. What makes Redux store a real game-changer is its contribution to predictability and debugging ease. The immutability premise ensures every state change leaves a trace, enabling the usage of tools like the Redux DevTools for creating user action logs. Think of it like a CCTV system for your state changes; you can literally see where, when, and how your state changed. This is a huge selling point for developers working as a team on large-scale applications. Moreover, it’s hard not to mention how Redux Store impacts the scalability factor. In a large-scale application with multiple components, direct state management can turn into a nightmare. Redux Store acts as the single source of truth which simplifies the communication between components and further brings in structure and organization. This makes your application highly scalable, maintainable and equally important, more robust towards bugs. In conclusion, the Redux Store absolutely embodies the essence of Redux. It brings out the predictability, maintainability, and ease of debugging in your applications. 
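To make dispatch(), getState(), and subscribe() concrete, here is a minimal sketch of plain Redux, independent of React. createStore is used purely for illustration; newer codebases typically reach for configureStore from Redux Toolkit instead.
JavaScript
// counter-store.js — minimal illustration of dispatch(), getState(), and subscribe()
import { createStore } from 'redux'

// A reducer: a pure function computing the next state from the previous state and an action.
function counterReducer(state = { count: 0 }, action) {
  switch (action.type) {
    case 'counter/incremented':
      return { ...state, count: state.count + 1 }
    default:
      return state
  }
}

const store = createStore(counterReducer)

// subscribe() registers a callback that runs after every dispatched action.
const unsubscribe = store.subscribe(() => {
  console.log('State changed:', store.getState())
})

// dispatch() is the only way to trigger a state change.
store.dispatch({ type: 'counter/incremented' })
console.log(store.getState()) // { count: 1 }

unsubscribe()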
Having a solid understanding of Redux Store transfers you into the dominant quadrant in the tech devotee’s arena, adequately preparing you for the complexities involved in high-scale application development. Remember, mastery of modern technologies like Redux brings you one step closer to the goal of a flawless user experience. And isn’t that what we all aim for? Action and Reducers in Redux Diving into the heart of Redux, we’ll now explore the key players that bring Redux to life – Actions and Reducers. If you’re keen on optimizing user interface and improvising data flow in your projects, understanding these two pillars of Redux can unlock possibilities for more efficient and interactive web applications. In Redux, Actions are payloads of information that send data from your application to the Redux Store. They play an integral role in triggering changes to the application’s state. A defining feature of Actions is that they are the only source of information for the Store. As they must be plain objects, it enables consistency, promoting easier testing and improved debugging procedures. Every action carries with them the ‘type’ property, which defines the nature or intent of the action. The type property drives the workflow and helps the Redux Store determine what transformations or updates are needed. More complex Actions might also include a ‘payload’ field, carrying additional information for the state change. Transitioning now to Reducers, they are the fundamental building blocks that define how state transitions happen in a Redux application. They take in the current state and an action and return to the new state, thus forming the core of Redux. It’s crucial to note that Reducers are pure functions, implying the output solely depends on the state and action input, and no side effects like network or database calls are executed. In practice, developers often split a single monolithic Reducer into smaller Reducer functions, each handling separate data slices. It boosts maintainability by keeping functions small and aids in better organization by grouping similar tasks together. The operational flow between Actions and Reducers is thus: an Action describes a change, and a Reducer takes in that action and evolves the state accordingly. The dispatcher function ties in this handshake by effectively bridging Actions and Reducers. A dispatched action is sent to all the Reducers in the Store, and based on the action’s type, the appropriate state change occurs. To conclude, Actions and Reducers are the conduits that power the state change in Redux. These two work conjointly, transforming applications into predictable, testable, and easily debuggable systems. They ensure that React Redux remains an indispensable tool for efficient web application development in the modern tech space. Mastering these components unlocks the potential of Redux, making it easier to scale, maintain, and enhance your applications. React Redux Middlewares Transitioning next towards the concept of middlewares in the context of Redux, inherently, a middleware in Redux context serves as a middleman between the dispatching of an action and the moment it reaches the reducer. Middlewares open a new horizon of possibilities when we need to deal with asynchronous actions and provide a convenient spot to put logics that don’t necessarily belong inside a component or even a reducer. 
Middleware provides a third-party extension point between dispatching an action and the moment it reaches the reducer, setting the stage for monitoring, logging, and intercepting dispatched actions before they hit a reducer. Redux has a built-in applyMiddleware function that we can use when creating our store to bind middleware to it. One of the most common use-cases for middleware is to support asynchronous interactions. Whereas actions need to be plain objects, and reducers only care about the previous and next state, a middleware can interpret actions with a different format, such as functions or promises, time-traveling, crash-reporting, and more. Applied primarily for handling asynchronous actions or for side-effects (API calls), Redux middleware libraries like Redux Thunk and Redux Saga lead the way here. Redux Thunk, for instance, allows you to write action creators that return a function rather than an action, extending the functionality of the Redux dispatch function. When this function gets dispatched, it’s Redux Thunk middleware that notifies Redux to hold up until the called API methods resolve before it gets to the reducer. On the other hand, Redux Saga exploits the ES6 feature generator functions to make asynchronous flows more manageable and efficient. It accomplishes this by pausing the Generator function and executing the async operation; once the async operation is completed, resume the generator function with received data. There is no denying that middleware is the driving force in making APIs work seamlessly with Redux. They can be thought of as an assembly line that prepares the action to get processed by a reducer. They take care of the nitty-gritty details like ordering the way multiple middlewares are applied or how to deal with async operations, ensuring that the Reducers stay pure by only being concerned with calculating the next state. In conclusion, React Redux and its arsenal, including Middleware, make web development a smooth ride. The introduction of middleware as a third-party extension bridging the gap between dispatching an action and the moment it hits the reducer has opened a new vista of opportunities for dealing with complex scenarios in a clean and effective manner. Actions, reducers, and middlewares —together they form a harmonious trinity that powers high-scale, seamless web development. Building Applications With React Redux Continuing on the journey of dissecting the best practices in React Redux application, let’s now delve into the world of ‘selectors.’ What does a selector do? Simply put, selectors are pure functions that extract and compute derived data from the Redux store state. In the Redux ecosystem, selectors are leveraged to encapsulate the state structure and add a protective shield, abstaining other parts of the app from knowing the intricate details. Selectors come in handy in numerous ways. Notably, they shine in enhancing the maintainability of React Redux applications, especially as they evolve and expand over time. As the scope of the application grows, it becomes necessary to reorganize the state shape – which selectors make less daunting. With selectors, achieving this change won’t require editing other parts of the codebase – a win for maintainability. Consider selectors as the ‘knowledge-bearers’ of state layout. It lends them the power to retrieve anything from the Redux state and perform computations and preparations to satisfy components’ requirements. 
Selectors, then, are a key component in managing state in Redux applications and preventing needless re-renders, ultimately optimizing performance. Next on our voyage, consider the 'Immutable Update Patterns.' These are best practices for updating state in Redux applications. As Redux relies on immutability to function correctly, following these patterns is vital. By avoiding direct mutation of existing data, the patterns help keep state consistent while keeping the code organized and readable. One important pattern involves updating arrays: array spread syntax (...), map, filter, and other non-mutating array methods make it possible to update arrays while respecting immutability. Another relates to updating objects, where object spread syntax is commonly employed. Distinct patterns target adding, updating, and removing items in arrays. Familiarizing yourself with these patterns streamlines React Redux development, leading to cleaner and better-structured code. Lastly, let's touch upon 'Connecting React and Redux.' The React Redux library facilitates this connection via two primary tools: 'Provider' and 'connect.' With 'Provider,' the Redux store becomes accessible to the rest of the app; it employs the Context API under the hood to make this happen. Meanwhile, 'connect' makes individual components 'aware' of the Redux store. It fetches the necessary state values from the store, dispatches actions to the store, and injects these as props into the components. The 'connect' function therefore fosters the interaction between React components and the Redux Store, helping to automate state management effectively. A short sketch at the end of this section ties these pieces together. React and Redux prove to be a formidable combination in creating dynamic web applications. From state management to the convenience of selectors, immutable update patterns, middleware, the Redux Store, actions, reducers, and the ability to seamlessly connect React with Redux, React Redux brings a compelling capacity to streamline web application development. It underlines the central role technology plays in problem-solving, especially where efficiency, scalability, and maintainability are crucial. By mastering these concepts, web developers can find their React Redux journey smoother than ever before. Having delved deep into the world of React Redux, we now understand the impact it has on streamlining complex code and boosting application efficiency. From the concept of a Redux Store holding the application state to the interplay of actions and reducers that update that state, React Redux revolutionizes state management. We've also seen the power of middleware functions, which are crucial for dealing with asynchronous actions and cross-cutting concerns such as logging. Finally, all these theoretical insights culminate in the real-world practice of building applications with this versatile JavaScript library. It's clear that when it comes to state management in web application development, React Redux stands as a robust, go-to solution. Here's to our continued exploration of technology as we chart new pathways, further deepening our understanding and skill in application development.
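As promised, here is a minimal sketch that combines a selector, an immutable update inside a reducer, and the Provider/connect wiring described above. The react and react-redux packages are assumed to be installed, and the component, state shape, and action type are illustrative assumptions rather than a definitive implementation.

```javascript
import React from 'react';
import { connect } from 'react-redux';

// Selector: the only place that knows where todos live inside the state tree.
const selectPendingTodos = (state) => state.todos.items.filter((t) => !t.done);

// Reducer fragment using the object/array spread patterns discussed above.
export function todosReducer(state = { items: [] }, action) {
  switch (action.type) {
    case 'todos/toggled':
      return {
        ...state,
        items: state.items.map((todo) =>
          todo.id === action.payload ? { ...todo, done: !todo.done } : todo
        ),
      };
    default:
      return state;
  }
}

// A plain component that receives store data and callbacks as props.
function TodoList({ todos, onToggle }) {
  return (
    <ul>
      {todos.map((todo) => (
        <li key={todo.id} onClick={() => onToggle(todo.id)}>
          {todo.text}
        </li>
      ))}
    </ul>
  );
}

// `connect` injects state values and dispatch callbacks as props.
const mapStateToProps = (state) => ({ todos: selectPendingTodos(state) });
const mapDispatchToProps = (dispatch) => ({
  onToggle: (id) => dispatch({ type: 'todos/toggled', payload: id }),
});

export default connect(mapStateToProps, mapDispatchToProps)(TodoList);

// At the application root, wrap the tree in <Provider store={store}> so
// that connected components like this one can reach the store.
```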
Learn to build an efficient image storage system with Node.js and MongoDB: manage, upload, retrieve, and display images for a variety of applications. Introduction Images have become crucial in numerous fields and sectors in the digital era. Reliable image storage and access are vital for a smooth user experience in content management systems, social networking, e-commerce, and many related applications. MongoDB, a NoSQL database, and Node.js, the well-known JavaScript runtime, work well together for building a smart image repository. In this article, you will examine the design and implementation of a Node.js API for image storage that uses MongoDB as the backend. Beyond saving and retrieving images, efficient image storage means adding intelligence to the system for tasks like image categorization, search, and delivery optimization. MongoDB's flexible schema makes it an excellent database for this purpose, and Node.js, widely recognized for its speed and scalability, is a great fit for building the API. Combined, they offer a cost-effective way to organize and store images. Setting up the Environment Before diving into the code, set up your working environment. Make sure both Node.js and MongoDB are installed on your machine; an Integrated Development Environment (IDE) or text editor is also recommended for writing and testing the code. To generate a new Node.js project, navigate to the project directory and run `npm init`, then follow the prompts to create a `package.json` file with the project's metadata. Next, install the necessary dependencies: Express, Mongoose, and Multer. Let's dive into the details of each of these components: Express Express is a popular Node.js web application framework: simple to use, adaptable, and offering a wide range of capabilities for both web and mobile applications. Express provides features and tools for managing HTTP requests and responses, routing, middleware management, and other responsibilities, making web application development quicker. Some of Express's salient features, illustrated in the sketch after this overview, are: Routing Express lets you set up routes for your application so you can tell it how to react to various HTTP requests (GET, POST, PUT, DELETE, for example). Middleware Middleware components can carry out a variety of responsibilities, including request processing, error handling, logging, and authentication, and Express's broad middleware ecosystem can be used in your own application. Template Engines You can use template engines such as EJS or Pug with Express to render HTML content on the server. JSON Parsing Express parses incoming JSON data seamlessly, which makes building REST-based APIs straightforward. Error Handling Express offers techniques such as custom error handlers to deal with failures cleanly. Because of its simplicity and versatility, Express is widely used in the Node.js ecosystem to build web servers and applications. Mongoose Mongoose is an Object Data Modeling (ODM) library for MongoDB, a NoSQL database. MongoDB stores information in a dynamic, JSON-like format called BSON (Binary JSON).
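To ground the Express features listed above, here is a minimal sketch of a standalone server; the express package is assumed to be installed, and the route paths and log messages are illustrative.

```javascript
// A minimal Express server showing routing, middleware, JSON parsing,
// and a custom error handler.
const express = require('express');

const app = express();

// Built-in middleware: parse incoming JSON request bodies.
app.use(express.json());

// Custom middleware: log every request before it reaches a route handler.
app.use((req, res, next) => {
  console.log(`${req.method} ${req.url}`);
  next();
});

// Routing: respond differently to different HTTP verbs and paths.
app.get('/health', (req, res) => res.json({ status: 'ok' }));
app.post('/echo', (req, res) => res.json(req.body));

// Error handling: a four-argument middleware acts as a custom error handler.
app.use((err, req, res, next) => {
  console.error(err);
  res.status(500).json({ error: 'Something went wrong' });
});

app.listen(3000, () => console.log('Listening on port 3000'));
```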
Mongoose brings structure to working with MongoDB by letting you define data models and rules, much as you would define schemas in a traditional relational database system. These are a few of its key features: Schema Definition Mongoose's schema system allows you to define data models and how their data is organized, making it possible to apply specific rules to the information you store, such as data types and validation guidelines. CRUD Operations Mongoose speeds up database interactions by providing simple ways to create, read, update, and delete documents in MongoDB. Middleware Like Express, Mongoose has hook methods that can be employed to run logic either before or after specific database operations. Data Validation Mongoose lets you define rules for your data to make sure it meets the structure you have set. When MongoDB is the database of choice for Node.js applications, Mongoose is often used to provide better organization and to streamline the conversation between application code and MongoDB. Multer Multer is a middleware for managing file uploads in Node.js applications. It is typically used together with Express for processing and storing files submitted through HTTP forms, notably file uploads in web-based programs. Multer offers options for managing and storing files, and it streamlines the file upload procedure. Multer's key qualities are: File Upload Handling Multer can manage client requests and file uploads and give the server access to the uploaded files. Configuration Multer can be configured to store uploaded files in a specific location, accept only certain file types, and rename files. Middleware Integration Multer integrates smoothly with Express, so file upload features can easily be added to your web apps. Multer is useful wherever your application must handle user-uploaded files, such as image uploads, document attachments, and more. Project Structure Let's start by creating the project structure. Here is a high-level overview of the structure:

```
image-storage-node.js/
│
├── node_modules/       # Dependencies
├── uploads/            # Image uploads directory
├── app.js              # Main application file
├── package.json        # Project dependencies and scripts
├── package-lock.json   # Dependency versions
├── routes/             # API routes
│   ├── images.js       # Image-related routes
└── models/             # MongoDB schema
    ├── image.js        # Image schema
```

Designing the Image Storage System The main parts of our image storage system are: Express.js server: a Node.js server that acts as the API endpoint, processing images and handling HTTP requests. MongoDB database: used to store information about the images, such as file details, user data, and keywords, along with the image data itself. Multer middleware: used to manage and store image uploads on the server. Implementing the Node.js API Let's start by implementing the Node.js API. Create a JavaScript file, e.g., `app.js`, and set up the basic structure of your Express.js server. This code creates your Express.js server, connects to your local MongoDB database named "image-storage," and uses Mongoose to set up the image schema (the original listing is not reproduced here; a consolidated sketch appears at the end of this article). The primary components of that code are the following: Express: the Express.js framework is imported to establish the server.
App: the Express application instance is created. CORS: Cross-Origin Resource Sharing middleware that allows the API to be accessed from other domains. Body Parser: middleware that parses JSON data in incoming requests. Image Routes: the routes that handle the image-related API endpoints are imported. app.use: registers the middleware functions that parse JSON request data and enable CORS. app.listen: starts the server on port 3000. Handling Image Uploads We will use the Multer middleware to deal with image uploads. Set up a POST route and the storage configuration for images. This code sets 5 MB as the maximum file size and configures Multer to store uploaded files in memory. When the client sends a POST request to `/upload`, the image's details are saved to MongoDB. Retrieving and Displaying Images Now, let's implement endpoints for retrieving and displaying images. The `/images` endpoint retrieves a list of image metadata, and the `/images/:id` endpoint fetches and serves the image file. Conclusion In this blog post, we've studied the basic structure and setup of an efficient image storage system powered by Node.js and MongoDB. We covered installing the environment, designing the system layout, and implementing the upload, retrieval, and display of images. Still, this is only the beginning of what an intelligent image storage system can do. You can enhance it even further with features like search, image resizing, and smart image analysis. The combination of Node.js and MongoDB delivers a solid foundation on which to build intelligent and flexible image storage systems that meet the diverse requirements of today's use cases.
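Since the code listings referenced in this article are not reproduced above, the following is a consolidated, minimal sketch of the API as described: the server setup, the Mongoose image schema, the Multer in-memory upload with a 5 MB limit, and the retrieval endpoints. It collapses the article's app.js, routes/images.js, and models/image.js into a single file for brevity, and the schema field names are assumptions.

```javascript
const express = require('express');
const mongoose = require('mongoose');
const multer = require('multer');
const cors = require('cors');

const app = express();
app.use(cors());          // allow requests from other origins
app.use(express.json());  // parse JSON request bodies (in place of body-parser)

// Connect to the local "image-storage" database mentioned in the article.
mongoose
  .connect('mongodb://127.0.0.1:27017/image-storage')
  .catch((err) => console.error(err));

// Image schema: metadata plus the binary data itself (field names assumed).
const Image = mongoose.model('Image', new mongoose.Schema({
  filename: String,
  contentType: String,
  tags: [String],
  data: Buffer,
  uploadedAt: { type: Date, default: Date.now },
}));

// Multer configured for in-memory storage with a 5 MB size limit.
const upload = multer({
  storage: multer.memoryStorage(),
  limits: { fileSize: 5 * 1024 * 1024 },
});

// POST /upload: store the uploaded file's metadata and bytes in MongoDB.
app.post('/upload', upload.single('image'), async (req, res) => {
  try {
    const image = await Image.create({
      filename: req.file.originalname,
      contentType: req.file.mimetype,
      tags: req.body.tags ? req.body.tags.split(',') : [],
      data: req.file.buffer,
    });
    res.status(201).json({ id: image._id, filename: image.filename });
  } catch (err) {
    res.status(500).json({ error: err.message });
  }
});

// GET /images: return a list of image metadata (without the binary data).
app.get('/images', async (req, res) => {
  const images = await Image.find({}, '-data');
  res.json(images);
});

// GET /images/:id: fetch one image and serve its bytes with the right type.
app.get('/images/:id', async (req, res) => {
  const image = await Image.findById(req.params.id);
  if (!image) return res.status(404).json({ error: 'Not found' });
  res.contentType(image.contentType).send(image.data);
});

app.listen(3000, () => console.log('Image API listening on port 3000'));
```

In a real project you would keep the schema in models/image.js and the routes in routes/images.js, as shown in the project structure above; the single-file version here is only meant to make the flow easy to follow.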