Popular

The latest and popular trending topics on DZone.


AI/ML

Artificial intelligence (AI) and machine learning (ML) are two fields that work together to create computer systems capable of perception, recognition, decision-making, and translation. Separately, AI is the ability for a computer system to mimic human intelligence through math and logic, and ML builds off AI by developing methods that "learn" through experience and do not require instruction. In the AI/ML Zone, you'll find resources ranging from tutorials to use cases that will help you navigate this rapidly growing field.

Java

Java is an object-oriented programming language that allows engineers to produce software for multiple platforms. Our resources in this Zone are designed to help engineers with Java program development, Java SDKs, compilers, interpreters, documentation generators, and other tools used to produce a complete application.

JavaScript

JavaScript (JS) is an object-oriented programming language that allows engineers to produce and implement complex features within web browsers. JavaScript is popular because of its versatility and is preferred as the primary choice unless a specific function is needed. In this Zone, we provide resources that cover popular JS frameworks, server applications, supported data types, and other useful topics for a front-end engineer.

Open Source

Open source refers to non-proprietary software that allows anyone to modify, enhance, or view the source code behind it. Our resources enable programmers to work or collaborate on projects created by different teams, companies, and organizations.

Latest Refcards and Trend Reports
Refcard #388: Threat Modeling
Trend Report: Data Pipelines
Refcard #378: Apache Kafka Patterns and Anti-Patterns
Trend Report: Database Systems

DZone's Featured Popular Resources

Neural Network Representations

By Boluwatife Ben-Adeola
Trained neural networks arrive at solutions that achieve superhuman performance on an increasing number of tasks. It would be at least interesting and probably important to understand these solutions. Interesting, in the spirit of curiosity and getting answers to questions like, “Are there human-understandable algorithms that capture how object-detection nets work?”[a] This would add a new modality of use to our relationship with neural nets from just querying for answers (Oracle Models) or sending on tasks (Agent Models) to acquiring an enriched understanding of our world by studying the interpretable internals of these networks’ solutions (Microscope Models). [1] And important in its use in the pursuit of the kinds of standards that we (should?) demand of increasingly powerful systems, such as operational transparency, and guarantees on behavioral bounds. A common example of an idealized capability we could hope for is “lie detection” by monitoring the model’s internal state. [2] Mechanistic interpretability (mech interp) is a subfield of interpretability research that seeks a granular understanding of these networks. One could describe two categories of mech interp inquiry: Representation interpretability: Understanding what a model sees and how it does; i.e., what information have models found important to look for in their inputs and how is this information represented internally? Algorithmic interpretability: Understanding how this information is used for computation across the model to result in some observed outcome Figure 1: “A Conscious Blackbox," the cover graphic for James C. Scott’s Seeing Like a State (1998) This post is concerned with representation interpretability. Structured as an exposition of neural network representation research [b], it discusses various qualities of model representations which range in epistemic confidence from the obvious to the speculative and the merely desired. Notes: I’ll use "Models/Neural Nets" and "Model Components" interchangeably. A model component can be thought of as a layer or some other conceptually meaningful ensemble of layers in a network. Until properly introduced with a technical definition, I use expressions like “input-properties” and “input-qualities” in place of the more colloquially used “feature.” Now, to some foundational hypotheses about neural network representations. Decomposability The representations of inputs to a model are a composition of encodings of discrete information. That is, when a model looks for different qualities in an input, the representation of the input in some component of the model can be described as a combination of its representations of these qualities. This makes (de)composability a corollary of “encoding discrete information”- the model’s ability to represent a fixed set of different qualities as seen in its inputs. Figure 2: A model layer trained on a task that needs it to care about background colors (trained on only blue and red) and center shapes (only circles and triangles) The component has dedicated a different neuron to the input qualities: "background color is composed of red," "background color is composed of blue," "center object is a circle,” and “center object is a triangle.” Consider the alternative: if a model didn't identify any predictive discrete qualities of inputs in the course of training. 
To do well on a task, the network would have to work like a lookup table whose keys are the bare input pixels (since it can't glean any discrete properties more interesting than "the ordered set of input pixels") pointing to unique identifiers. We have a name for this in practice: memorizing. Therefore, saying, "Model components learn to identify useful discrete qualities of inputs and compose them to get internal representations used for downstream computation," is not far off from saying, "Sometimes, neural nets don't completely memorize."

Figure 3: An example of how learning discrete input qualities affords generalization or robustness

This example test input, not seen in training, has a representation expressed in the learned qualities. While the model might not fully appreciate what "purple" is, it'll be better off than if it were just trying to do a table lookup for input pixels.

Revisiting the hypothesis: "The representations of inputs to a model are a composition of encodings of discrete information." While, as we've seen, this verges on the obvious, it provides a template for introducing stricter specifications deserving of study. The first of these stricter specifications looks at "…are a composition of encodings…": what is observed, speculated, and hoped for about the nature of these compositions of the encodings?

Linearity

To recap decomposition, we expect (non-memorizing) neural networks to identify and encode varied information from input qualities/properties. This implies that any activation state is a composition of these encodings.

Figure 4: What the decomposability hypothesis suggests

What is the nature of this composition? In this context, saying a representation is linear suggests that the information of discrete input qualities is encoded as directions in activation space, and that these encodings are composed into a representation by a vector sum:

activation = Σ_i scale_i * direction_i

We'll investigate both claims.

Claim #1: Encoded Qualities Are Directions in Activation Space

Composability already suggests that the representation of an input in some model component (a vector in activation space) is composed of discrete encodings of input qualities (other vectors in activation space). The additional thing said here is that in a given input-quality encoding, we can think of there being some core essence of the quality, which is the vector's direction. This makes any particular encoding vector just a scaled version of this direction (unit vector).

Figure 5: Various encoding vectors for the red-ness quality in the input

They are all just scaled versions of some fundamental red-ness unit vector, which specifies direction. This is simply a generalization of the composability argument: neural networks can learn to make their encodings of input qualities "intensity"-sensitive by scaling some characteristic unit vector.

Alternative Impractical Encoding Regimes

Figure 6a

An alternative encoding scheme could be that all we can get from models are binary encodings of properties, e.g., "The red values in this RGB input are non-zero." This is clearly not very robust.

Figure 6b

Another is that we have multiple unique directions for qualities that could be described by mere differences in scale of some more fundamental quality: one neuron for "kind-of-red" (0-127 in the RGB input) and another for "really-red" (128-255 in the RGB input). We'd run out of directions fairly quickly.
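Before moving on to the second claim, here is a minimal sketch of what Claim #1 amounts to. This is my own illustration, not code from the article: the "red-ness" direction, the chosen vector values, and the intensities are all hypothetical. A single unit direction carries the quality, the scale carries the intensity, and the intensity is recoverable by projection, unlike the one-direction-per-intensity-band scheme above.

Python

# Illustrative sketch (assumed values): one unit direction for "red-ness",
# with intensity encoded purely as the scale of that direction.
import numpy as np

red_direction = np.array([0.6, 0.8])  # hypothetical unit vector, ||v|| = 1

def encode_redness(intensity):
    """Encode an intensity in [0, 1] as a scaled copy of the red-ness direction."""
    return intensity * red_direction

def decode_redness(encoding):
    """Recover the intensity by projecting onto the unit direction."""
    return float(encoding @ red_direction)

for intensity in (0.1, 0.5, 0.9):
    vec = encode_redness(intensity)
    # Every encoding points the same way; only its length differs, so one
    # direction covers the whole intensity range instead of exhausting
    # the available directions with one per intensity band.
    assert np.isclose(decode_redness(vec), intensity)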
Claim #2: These Encodings Are Composed as a Vector Sum

Now, this is the stronger of the two claims, as it is not necessarily a consequence of anything introduced thus far.

Figure 7: An example of 2-property representation

Note: We assume independence between properties, ignoring the degenerate case where a size of zero implies the color is not red (nothing).

A vector sum might seem like the natural (if not only) thing a network could do to combine these encoding vectors. To appreciate why this claim is worth verifying, it'll be worth investigating whether alternative non-linear functions could also get the job done. Recall that what we want is a function that combines these encodings at some component in the model in a way that preserves information for downstream computation. So this is effectively an information compression problem. As discussed in Elhage et al. [3a], the following non-linear compression scheme could get the job done:

t = (floor(Z * x) + y) / Z

where we seek to compress values x and y into t. The value of Z is chosen according to the required floating-point precision needed for compressions.

Python

# A Python implementation
from math import floor

def compress_values(x1, x2, precision=1):
    z = 10 ** precision
    compressed_val = (floor(z * x1) + x2) / z
    return round(compressed_val, precision * 2)

def recover_values(compressed_val, precision=1):
    z = 10 ** precision
    x2_recovered = (compressed_val * z) - floor(compressed_val * z)
    x1_recovered = compressed_val - (x2_recovered / z)
    return round(x1_recovered, precision), round(x2_recovered, precision)

# Now to compress vectors a and b
a = [0.3, 0.6]
b = [0.2, 0.8]

compressed_a_b = [compress_values(a[0], b[0]), compress_values(a[1], b[1])]
# Returned [0.32, 0.68]

recovered_a, recovered_b = (
    [x, y]
    for x, y in zip(
        recover_values(compressed_a_b[0]),
        recover_values(compressed_a_b[1])
    )
)
# Returned ([0.3, 0.6], [0.2, 0.8])

assert all([recovered_a == a, recovered_b == b])

As demonstrated, we're able to compress and recover vectors a and b just fine, so this is also a viable way of compressing information for later computation, using non-linearities like the floor() function that neural networks can approximate. While this seems a little more tedious than just adding vectors, it shows the network does have options. This calls for some evidence and further arguments in support of linearity.

Evidence of Linearity

The often-cited example of a model component exhibiting strong linearity is the embedding layer in language models [4], where relationships like the following exist between representations of words:

E("kings") - E("king") ≈ E("queens") - E("queen")

This example would hint at the following relationship between the quality of $plurality$ in the input words and the rest of their representation:

E(word_plural) ≈ E(word_singular) + direction_plurality

Okay, so that's some evidence for one component in one type of neural network having linear representations. The broad outline of the argument for this being prevalent across networks is that linear representations are both the more natural and the more performant [3b][3a] option for neural networks to settle on.

How Important for Interpretability Is It That This Is True?

If non-linear compression is prevalent across networks, there are two alternative regimes in which networks could operate:

Computation is still mostly done on linear variables: In this regime, while the information is encoded and moved between components non-linearly, the model components would still decompress the representations to run linear computations.
From an interpretability standpoint, while this needs some additional work to reverse engineer the decompression operation, it wouldn't pose too high a barrier.

Figure 8: Non-linear compression and propagation intervened by linear computation

Computation is done in a non-linear state: The model figures out a way to do computations directly on the non-linear representation. This would pose a challenge needing new interpretability methods. However, based on the arguments discussed earlier about model architecture affordances, this is expected to be unlikely.

Figure 9: Direct non-linear computation

Features

As promised in the introduction, after avoiding the word "feature" this far into the post, we'll introduce it properly. As a quick aside, I think the engagement of the research community on the topic of defining what we mean when we use the word "feature" is one of the things that makes mech interp, as a pre-paradigmatic science, exciting. While different definitions have been proposed [3c] and the final verdict is by no means out, in this post and others to come on mech interp, I'll be using the following:

"The features of a given neural network constitute a set of all the input qualities the network would dedicate a neuron to if it could."

We've already discussed the idea of networks necessarily encoding discrete qualities of inputs, so the most interesting part of the definition is, "...would dedicate a neuron to if it could."

What Is Meant by "...Dedicate a Neuron To..."?

In a case where all quality-encoding directions are unique one-hot vectors in activation space ([0, 1] and [1, 0], for example), the neurons are said to be basis-aligned; i.e., one neuron's activation in the network independently represents the intensity of one input quality.

Figure 10: Example of a representation with basis-aligned neurons

Note that while sufficient, this property is not necessary for lossless compression of encodings with vector addition. The core requirement is that these feature directions be orthogonal. The reason for this is the same as when we explored the non-linear compression method: we want to completely recover each encoded feature downstream.

Basis Vectors

Following the linearity hypothesis, we expect the activation vector to be a sum of all the scaled feature directions:

activation_d = Σ_j feature^j_i * feature^j_d

Given an activation vector (which is what we can directly observe when our network fires), if we want to know the activation intensity of some feature j in the input, all we need is the feature's unit vector, feature^j_d (where "·" denotes the vector dot product):

activation_d · feature^j_d = ( Σ_k feature^k_i * feature^k_d ) · feature^j_d

If all the feature unit vectors of that network component (making up the set Features_d) are orthogonal to each other:

feature^j_d · feature^k_d = 0, for all j ≠ k

And, for any vector, the dot product with itself is its squared norm, which is 1 for the unit feature directions:

feature^j_d · feature^j_d = 1

These simplify our equation to give an expression for our feature intensity feature^j_i:

feature^j_i = activation_d · feature^j_d

Allowing us to fully recover our compressed feature:

feature^j_i * feature^j_d = (activation_d · feature^j_d) * feature^j_d

All that was to establish the ideal property of orthogonality between feature directions. This means that even though the idea of "one neuron firing by x-much == one feature is present by x-much" is pretty convenient to think about, there are other equally performant feature directions that don't have their neuron-firing patterns aligning this cleanly with feature patterns. (As an aside, it turns out basis-aligned neurons don't happen that often. [3d])

Figure 11: Orthogonal feature directions from non-basis-aligned neurons

With this context, the request "dedicate a neuron to…" might seem arbitrarily specific.
Perhaps “dedicate an extra orthogonal direction vector” would be sufficient to accommodate an additional quality. But as you probably already know, orthogonal vectors in a space don’t grow on trees. A 2-dimensional space can only have 2 orthogonal vectors at a time, for example. So to make more room, we might need an extra dimension, i.e [X X] -> [X X X] which is tantamount to having an extra neuron dedicated to this feature. How Are These Features Stored in Neural Networks? To touch grass quickly, what does it mean when a model component has learned 3 orthogonal feature directions {[1 0 0], [0 1 0], [0 0 1]} for compressing an input vector [a b c]? To get the compressed activation vector, we expect a series of dot products with each feature direction to get our feature scale. Now we just have to sum up our scaled-up feature directions to get our “compressed” activation state. In this toy example, the features are just the vector values so lossless decompressing gets us what we started with. The question is: what does this look like in a model? The above sequence of transformations of dot products followed by a sum is equivalent to the operations of the deep learning workhorse: matrix multiplication. The earlier sentence, “…a model component has learned 3 orthogonal feature directions,” should have been a giveaway. Models store their learnings in weights, and so our feature vectors are just the rows of this layer’s learned weight matrix, W. Why didn’t I just say the whole time, “Matrix multiplication. End of section.” Because we don’t always have toy problems in the real world. The learned features aren’t always stored in just one set of weights. It could (and usually does) involve an arbitrarily long sequence of linear and non-linear compositions to arrive at some feature direction (but the key insight of decompositional linearity is that this computation can be summarised by a direction used to compose some activation). The promise of linearity we discussed only has to do with how feature representations are composed. For example, some arbitrary vector is more likely to not be hanging around for discovery by just reading one row of a layer’s weight matrix, but the computation to encode that feature is spread across several weights and model components. So we had to address features as arbitrary strange directions in activation space because they often are. This point brings the proposed dichotomy between representation and algorithmic interpretability into question. Back to our working definition of features: "The features of a given neural network constitute a set of all the input qualities the network would dedicate a neuron to if it could." On the Conditional Clause: “…Would Dedicate a Neuron to if It Could...” You can think of this definition of a feature as a bit of a set-up for an introduction to a hypothesis that addresses its counterfactual: What happens when a neural network cannot provide all its input qualities with dedicated neurons? Superposition Thus far, our model has done fine on the task that required it to compress and propagate 2 learned features — “size” and “red-ness” — through a 2-dimensional layer. What happens when a new task requires the compression and propagation of an additional feature like the x-displacement of the center of the square? Figure 12 This shows our network with a new task, requiring it to propagate one more learned property of the input: center x-displacement. We’ve returned to using neuron-aligned bases for convenience. 
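To make the "features stored as rows of a weight matrix" discussion above concrete, here is a small numpy sketch. It is my own illustration, not code from the article: the 3x3 random orthonormal matrix W, the toy intensity vector x, and all variable names are assumed for the example. The rows of W are orthonormal but deliberately not basis-aligned, echoing Figure 11: dot products give each feature's scale, summing the scaled rows gives the activation, and every scale is recoverable losslessly.

Python

# Illustrative sketch (assumed values): feature directions stored as the rows of W.
import numpy as np

rng = np.random.default_rng(0)

# Three orthonormal feature directions in a 3-neuron layer, obtained from a random
# rotation so that no single neuron lines up with a single feature.
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
W = Q.T                       # each row of W is one feature direction

x = np.array([0.7, 0.2, 0.5])  # "true" feature intensities present in the input

scales = W @ x                 # dot product of the input with each feature direction
activation = W.T @ scales      # sum of the scaled feature directions

# Because the rows of W are orthonormal, each intensity can be read back downstream
recovered = W @ activation
assert np.allclose(recovered, scales)
assert np.allclose(activation, x)  # lossless round trip in this toy, full-rank case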
Before we go further with this toy model, it would be worth thinking through whether there are analogs of this in large real-world models. Let's take the large language model GPT-2 small [5]. Do you think, if you had all week, you could think of up to 769 useful features of an arbitrary 700-word query that would help predict the next token (e.g., "is a formal letter," "contains how many verbs," "is about 'Chinua Achebe,'" etc.)? Even if we ignored the fact that feature discovery is one of the known superpowers of neural networks [c] and assumed GPT-2 small would also end up with only 769 useful input features to encode, we'd have a situation much like our toy problem above. This is because GPT-2 has —at the narrowest point in its architecture— only 768 neurons to work with, just like our toy problem has 2 neurons but needs to encode information about 3 features. [d]

So this whole "model component encodes more features than it has neurons" business should be worth looking into. It probably also needs a shorter name. That name is the Superposition hypothesis. Considering the above thought experiment with GPT-2 small, it would seem this hypothesis is just stating the obvious: that models are somehow able to represent more input qualities (features) than they have dimensions for.

What Exactly Is Hypothetical About Superposition?

There's a reason I introduced it this late in the post: it depends on other abstractions that aren't necessarily self-evident. The most important is the prior formulation of features, which assumes linear decomposition: the expression of neural net representations as sums of scaled directions representing discrete qualities of their inputs. These definitions might seem circular, but they're not if defined sequentially: if you conceive of neural networks as encoding discrete information about inputs, called features, as directions in activation space, then when we suspect the model has more of these features than it has neurons, we call this superposition.

A Way Forward

As we've seen, it would be convenient if the features of a model were aligned with neurons, and necessary for them to be orthogonal vectors to allow lossless recovery from compressed representations. So to suggest this isn't happening poses difficulties for interpretation and raises questions about how networks can pull this off anyway. Further development of the hypothesis provides a model for thinking about why and how superposition happens, clearly exposes the phenomenon in toy problems, and develops promising methods for working around barriers to interpretability [6]. More on this in a future post.

Footnotes

[a] That is, algorithms more descriptive than "Take this Neural Net architecture and fill in its weights with these values, then do a forward pass."

[b] Primarily from ideas introduced in Toy Models of Superposition.

[c] This refers specifically to the codification of features as their superpower. Humans are pretty good at predicting the next token in human text; we're just not good at writing programs for extracting and representing this information in vector space. All of that is hidden away in the mechanics of our cognition.

[d] Technically, the number to compare the 768-dimension residual stream width to is the maximum number of features we think *any* single layer would have to deal with at a time.
If we assume equal computational workload between layers and assume each batch of features was built based on computations on the previous, for the 12-layer GPT-2 model, this would be 12 * 768 = 9,216 features you'd need to think up.

References

[1] Chris Olah on Mech Interp - 80,000 Hours
[2] Interpretability Dreams
[3] Toy Models of Superposition
[3a] Nonlinear Compression
[3b] Features as Directions
[3c] What are Features?
[3d] Definitions and Motivation
[4] Mikolov, T., Yih, W., and Zweig, G., 2013. "Linguistic Regularities in Continuous Space Word Representations." Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 746-751.
[5] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., and Sutskever, I. "Language Models are Unsupervised Multitask Learners."
[6] Towards Monosemanticity: Decomposing Language Models With Dictionary Learning
Evolution of Privacy-Preserving AI: From Protocols to Practical Implementations

By Petr Emelianov
Year by year, artificial intelligence evolves and becomes more efficient for solving everyday human tasks. But at the same time, it increases the possibility of personal information misuse, reaching unprecedented levels of power and speed in analyzing and spreading individuals' data. In this article, I would like to take a closer look at the strong connection between artificial intelligence systems and machine learning and their use of increasingly private and sensitive data. Together, we'll explore existing privacy risks, discuss traditional approaches to privacy in machine learning, and analyze ways to overcome security breaches. Importance of Privacy in AI It is no secret that today, AI is extensively used in many spheres, including marketing. NLP, or Natural Language Processing, interprets human language and is used in voice assistants and chatbots, understanding accents and emotions; it links social media content to engagement. Machine learning employs algorithms to analyze data, improve performance, and enable AI to make decisions without human intervention. Deep Learning relies on neural networks and uses extensive datasets for informed choices. These AI types often collaborate, posing challenges to data privacy. AI collects data intentionally, where users provide information, or unintentionally, for instance, through facial recognition. The problem arises when unintentional data collection leads to unexpected uses, compromising privacy. For example, discussing pet food or more intimate purchases around a phone can lead to targeted ads, revealing unintentional data gathering. AI algorithms, while being intelligent, may inadvertently capture information and subject it to unauthorized use. Thus, video doorbells with facial identification intended for family recognition may unintentionally collect data about unrelated individuals, causing neighbors to worry about surveillance and data access. Bearing in mind the above, it is crucially important to establish a framework for ethical decision-making regarding the use of new AI technologies. Addressing privacy challenges and contemplating the ethics of technology is imperative for the enduring success of AI. One of the main reasons for that is that finding a balance between technological innovation and privacy concerns will foster the development of socially responsible AI, contributing to the long-term creation of public value and private security. Traditional Approach Risks Before we proceed with efficient privacy-preserving techniques, let us take a look at traditional approaches and the problems they may face. Traditional approaches to privacy and machine learning are centered mainly around two concepts: user control and data protection. Users want to know who collects their data, for what purpose, and how long it will be stored. Data protection involves anonymized and encrypted data, but even here, the gaps are inevitable, especially in machine learning, where decryption is often necessary. Another issue is that machine learning involves multiple stakeholders, creating a complex web of trust. Trust is crucial when sharing digital assets, such as training data, inference data, and machine learning models across different entities. Just imagine that there is an entity that owns the training data, while another set of entities may own the inference data. The third entity provides a machine learning server running on the inference, performed by a model owned by someone else. 
Additionally, it operates on infrastructure from an extensive supply chain involving many parties. Due to this, all the entities must demonstrate trust in each other within a complex chain. Managing this web of trust becomes increasingly difficult. Examples of Security Breaches As we rely more on communication technologies using machine learning, the chance of data breaches and unauthorized access goes up. Hackers might try to take advantage of vulnerabilities in these systems to get hold of personal data, such as name, address, and financial information, which can result in fund losses and identity theft. A report on the malicious use of AI outlines three areas of security concern: expansion of existing threats, new attack methods, and changes in the typical character of threats. Examples of malicious AI use include BEC attacks using deepfake technology, contributing to social engineering tactics. AI-assisted cyber-attacks, demonstrated by IBM's DeepLocker, show how AI can enhance ransomware attacks by making decisions based on trends and patterns. Notably, TaskRabbit experienced an AI-assisted cyber-attack, where an AI-enabled botnet executed a DDoS attack, leading to a data breach which affected 3.75 million customers. Moreover, increased online shopping is fueling card-not-present (CNP) fraud, combined with rising synthetic identity and identity theft issues. Predicted losses from it could reach $200 billion by 2024, with transaction volumes rising over 23%. Privacy-Preserving Machine Learning This is when privacy-preserving machine learning comes in with a solution. Among the most effective techniques are federated learning, homomorphic encryption, and differential privacy. Federated learning allows separate entities to collectively train a model without sharing explicit data. In turn, homomorphic encryption enables machine learning on encrypted data throughout the process and differential privacy ensures that calculation outputs cannot be tied to individual data presence. These techniques, combined with trusted execution environments, can effectively address the challenges at the intersection of privacy and machine learning. Privacy Advantages of Federated Learning As you can see, classical machine learning models lack the efficiency to implement AI systems and IoT practices securely when compared to privacy-preserving machine learning techniques, particularly federated learning. Being a decentralized version of machine learning, FL helps make AI security-preserving techniques more reliable. In traditional methods, sensitive user data is sent to centralized servers for training, posing numerous privacy concerns, and federated learning addresses this by allowing models to be trained locally on devices, ensuring user data security. Enhanced Data Privacy and Security Federated learning, with its collaborative nature, treats each IoT device on the edge as a unique client, training models without transmitting raw data. This ensures that during the federated learning process, each IoT device only gathers the necessary information for its task. By keeping raw data on the device and sending only model updates to the central server, federated learning safeguards private information, minimizes the risk of personal data leakage, and ensures secure operations. Improved Data Accuracy and Diversity Another important issue is that centralized data used to train a model may not accurately represent the full spectrum of data that the model will encounter. 
In contrast, training models on decentralized data from various sources and exposing them to a broader range of information enhances the model's ability to generalize to new data, handle variations, and reduce bias.

Higher Adaptability

One more advantage federated learning models exhibit is a notable capability to adapt to new situations without requiring retraining, which provides extra security and reliability. Using insights from previous experiences, these models can make predictions and apply knowledge gained in one field to another. For instance, if the model becomes more proficient in predicting outcomes in a specific domain, it can seamlessly apply this knowledge to another field, enhancing efficiency, reducing costs, and expediting processes.

Encryption Techniques

To enhance privacy in FL even further, efficient encryption techniques are often used. Among them are homomorphic encryption and secure multi-party computation. These methods ensure that data stays encrypted and secure during communication and model aggregation.

Homomorphic encryption allows computations on encrypted data without decryption. For example, if a user wants to upload data to a cloud-based server, they can encrypt it, turning it into ciphertext, and only then upload it. The server processes that data without decrypting it and returns the result to the user, who then decrypts it with their secret key.

Multi-party computation, or MPC, enables multiple parties, each with their own private data, to evaluate a computation without revealing any of the private data held by each party. A multi-party computation protocol ensures both privacy and accuracy: the private information held by the parties cannot be inferred from the execution of the protocol, and if any party within the group decides to share information or deviates from the instructions during the protocol execution, the MPC will not allow it to force the other parties to output an incorrect result or leak any private information.

Final Considerations

In place of a conclusion, I would like to stress the importance and urgency of embracing advanced security approaches in ML. For effective and long-term outcomes in AI safety and security, there should be coordinated efforts between the AI development community and legal and policy institutions. Building trust and establishing proactive channels for collaboration in developing norms, ethics, standards, and laws is crucial to avoid reactive and potentially ineffective responses from both the technical and policy sectors.

I would also like to quote the authors of the report mentioned above, who propose the following recommendations to face security challenges in AI:

• Policymakers should collaborate closely with technical researchers to explore, prevent, and mitigate potential malicious applications of AI.
• AI researchers and engineers should recognize the dual-use nature of their work, considering the potential for misuse and allowing such considerations to influence research priorities and norms. They should also proactively engage with relevant stakeholders when harmful applications are foreseeable.
• Identify best practices from mature research areas, like computer security, and apply them to address dual-use concerns in AI.
• Actively work towards broadening the involvement of stakeholders and domain experts in discussions addressing these challenges.

I hope this article encourages you to investigate the topic on your own, contributing to a more secure digital world.
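As a concrete illustration of the federated learning idea discussed above, here is a minimal numpy sketch. It is my own simplification, not code or a protocol from the article: the linear model, the synthetic datasets, and all names are assumed. Each client computes a model update on its private data, only the updates travel to the server, and the server averages them; the raw data never leaves the clients.

Python

# Illustrative federated-averaging sketch (assumed setup, no encryption layer shown).
import numpy as np

rng = np.random.default_rng(42)
true_w = np.array([2.0, -1.0])

def make_private_dataset(n=100):
    # Synthetic data standing in for one client's private records.
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    return X, y

def local_update(X, y):
    # Each client fits a tiny linear model locally; only the weights leave the device.
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

# Three clients, each holding data that stays local
client_data = [make_private_dataset() for _ in range(3)]
client_weights = [local_update(X, y) for X, y in client_data]

# The central server aggregates model updates only, never the raw data
global_w = np.mean(client_weights, axis=0)
print("aggregated model:", global_w)  # close to [2.0, -1.0]

In a production setting, the model updates themselves would additionally be protected, for example with secure aggregation, homomorphic encryption, or MPC as described above.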
NIST AI Risk Management Framework: Developer's Handbook
By Josephine Eskaline Joyce
The Four Pillars of Programming Logic in Software Quality Engineering
By Stelios Manioudakis
Top 5 In-Demand Tech Skills for 2024: A Guide for Career Advancement
By Roopa Kushtagi
Spring Strategy Pattern Example

In this example, we'll learn about the Strategy pattern in Spring. We'll cover different ways to inject strategies, starting from a simple list-based approach to a more efficient map-based method. To illustrate the concept, we'll use the three Unforgivable curses from the Harry Potter series — Avada Kedavra, Crucio, and Imperio. What Is the Strategy Pattern? The Strategy Pattern is a design principle that allows you to switch between different algorithms or behaviors at runtime. It helps make your code flexible and adaptable by allowing you to plug in different strategies without changing the core logic of your application. This approach is useful in scenarios where you have different implementations for a specific task of functionality and want to make your system more adaptable to changes. It promotes a more modular code structure by separating the algorithmic details from the main logic of your application. Step 1: Implementing Strategy Picture yourself as a dark wizard who strives to master the power of Unforgivable curses with Spring. Our mission is to implement all three curses — Avada Kedavra, Crucio and Imperio. After that, we will switch between curses (strategies) in runtime. Let's start with our strategy interface: Java public interface CurseStrategy { String useCurse(); String curseName(); } In the next step, we need to implement all Unforgivable curses: Java @Component public class CruciatusCurseStrategy implements CurseStrategy { @Override public String useCurse() { return "Attack with Crucio!"; } @Override public String curseName() { return "Crucio"; } } @Component public class ImperiusCurseStrategy implements CurseStrategy { @Override public String useCurse() { return "Attack with Imperio!"; } @Override public String curseName() { return "Imperio"; } } @Component public class KillingCurseStrategy implements CurseStrategy { @Override public String useCurse() { return "Attack with Avada Kedavra!"; } @Override public String curseName() { return "Avada Kedavra"; } } Step 2: Inject Curses as List Spring brings a touch of magic that allows us to inject multiple implementations of an interface as a List so we can use it to inject strategies and switch between them. But let's first create the foundation: Wizard interface. Java public interface Wizard { String castCurse(String name); } And we can inject our curses (strategies) into the Wizard and filter the desired one. Java @Service public class DarkArtsWizard implements Wizard { private final List<CurseStrategy> curses; public DarkArtsListWizard(List<CurseStrategy> curses) { this.curses = curses; } @Override public String castCurse(String name) { return curses.stream() .filter(s -> name.equals(s.curseName())) .findFirst() .orElseThrow(UnsupportedCurseException::new) .useCurse(); } } UnsupportedCurseException is also created if the requested curse does not exist. 
Java public class UnsupportedCurseException extends RuntimeException { } And we can verify that curse casting is working: Java @SpringBootTest class DarkArtsWizardTest { @Autowired private DarkArtsWizard wizard; @Test public void castCurseCrucio() { assertEquals("Attack with Crucio!", wizard.castCurse("Crucio")); } @Test public void castCurseImperio() { assertEquals("Attack with Imperio!", wizard.castCurse("Imperio")); } @Test public void castCurseAvadaKedavra() { assertEquals("Attack with Avada Kedavra!", wizard.castCurse("Avada Kedavra")); } @Test public void castCurseExpelliarmus() { assertThrows(UnsupportedCurseException.class, () -> wizard.castCurse("Abrakadabra")); } } Another popular approach is to define the canUse method instead of curseName. This will return boolean and allows us to use more complex filtering like: Java public interface CurseStrategy { String useCurse(); boolean canUse(String name, String wizardType); } @Component public class CruciatusCurseStrategy implements CurseStrategy { @Override public String useCurse() { return "Attack with Crucio!"; } @Override public boolean canUse(String name, String wizardType) { return "Crucio".equals(name) && "Dark".equals(wizardType); } } @Service public class DarkArtstWizard implements Wizard { private final List<CurseStrategy> curses; public DarkArtsListWizard(List<CurseStrategy> curses) { this.curses = curses; } @Override public String castCurse(String name) { return curses.stream() .filter(s -> s.canUse(name, "Dark"))) .findFirst() .orElseThrow(UnsupportedCurseException::new) .useCurse(); } } Pros: Easy to implement. Cons: Runs through a loop every time, which can lead to slower execution times and increased processing overhead. Step 3: Inject Strategies as Map We can easily address the cons from the previous section. Spring lets us inject a Map with bean names and instances. It simplifies the code and improves its efficiency. Java @Service public class DarkArtsWizard implements Wizard { private final Map<String, CurseStrategy> curses; public DarkArtsMapWizard(Map<String, CurseStrategy> curses) { this.curses = curses; } @Override public String castCurse(String name) { CurseStrategy curse = curses.get(name); if (curse == null) { throw new UnsupportedCurseException(); } return curse.useCurse(); } } This approach has a downside: Spring injects the bean name as the key for the Map, so strategy names are the same as the bean names like cruciatusCurseStrategy. This dependency on Spring's internal bean names might cause problems if Spring's code or our class names change without notice. Let's check that we're still capable of casting those curses: Java @SpringBootTest class DarkArtsWizardTest { @Autowired private DarkArtsWizard wizard; @Test public void castCurseCrucio() { assertEquals("Attack with Crucio!", wizard.castCurse("cruciatusCurseStrategy")); } @Test public void castCurseImperio() { assertEquals("Attack with Imperio!", wizard.castCurse("imperiusCurseStrategy")); } @Test public void castCurseAvadaKedavra() { assertEquals("Attack with Avada Kedavra!", wizard.castCurse("killingCurseStrategy")); } @Test public void castCurseExpelliarmus() { assertThrows(UnsupportedCurseException.class, () -> wizard.castCurse("Crucio")); } } Pros: No loops. Cons: Dependency on bean names, which makes the code less maintainable and more prone to errors if names are changed or refactored. 
Step 4: Inject List and Convert to Map Cons of Map injection can be easily eliminated if we inject List and convert it to Map: Java @Service public class DarkArtsWizard implements Wizard { private final Map<String, CurseStrategy> curses; public DarkArtsMapWizard(List<CurseStrategy> curses) { this.curses = curses.stream() .collect(Collectors.toMap(CurseStrategy::curseName, Function.identity())); } @Override public String castCurse(String name) { CurseStrategy curse = curses.get(name); if (curse == null) { throw new UnsupportedCurseException(); } return curse.useCurse(); } } With this approach, we can move back to use curseName instead of Spring's bean names for Map keys (strategy names). Step 5: @Autowire in Interface Spring supports autowiring into methods. The simple example of autowiring into methods is through setter injection. This feature allows us to use @Autowired in a default method of an interface so we can register each CurseStrategy in the Wizard interface without needing to implement a registration method in every strategy implementation. Let's update the Wizard interface by adding a registerCurse method: Java public interface Wizard { String castCurse(String name); void registerCurse(String curseName, CurseStrategy curse) } This is the Wizard implementation: Java @Service public class DarkArtsWizard implements Wizard { private final Map<String, CurseStrategy> curses = new HashMap<>(); @Override public String castCurse(String name) { CurseStrategy curse = curses.get(name); if (curse == null) { throw new UnsupportedCurseException(); } return curse.useCurse(); } @Override public void registerCurse(String curseName, CurseStrategy curse) { curses.put(curseName, curse); } } Now, let's update the CurseStrategy interface by adding a method with the @Autowired annotation: Java public interface CurseStrategy { String useCurse(); String curseName(); @Autowired default void registerMe(Wizard wizard) { wizard.registerCurse(curseName(), this); } } At the moment of injecting dependencies, we register our curse into the Wizard. Pros: No loops, and no reliance on inner Spring bean names. Cons: No cons, pure dark magic. Conclusion In this article, we explored the Strategy pattern in the context of Spring. We assessed different strategy injection approaches and demonstrated an optimized solution using Spring's capabilities. The full source code for this article can be found on GitHub.

By Max Stepovyi
Exploring Text Generation With Python and GPT-4

In the rapidly evolving landscape of artificial intelligence, text generation models have emerged as a cornerstone, revolutionizing how we interact with machine learning technologies. Among these models, GPT-4 stands out, showcasing an unprecedented ability to understand and generate human-like text. This article delves into the basics of text generation using GPT-4, providing Python code examples to guide beginners in creating their own AI-driven text generation applications. Understanding GPT-4 GPT-4, or Generative Pre-trained Transformer 4, represents the latest advancement in OpenAI's series of text generation models. It builds on the success of its predecessors by offering more depth and a nuanced understanding of context, making it capable of producing text that closely mimics human writing in various styles and formats. At its core, GPT-4 operates on the principles of deep learning, utilizing a transformer architecture. This architecture enables the model to pay attention to different parts of the input text differently, allowing it to grasp the nuances of language and generate coherent, contextually relevant responses. Getting Started With GPT-4 and Python To experiment with GPT-4, one needs access to OpenAI's API, which provides a straightforward way to utilize the model without the need to train it from scratch. The following Python code snippet demonstrates how to use the OpenAI API to generate text with GPT-4: Python from openai import OpenAI # Set OpenAI API key client = OpenAI(api_key = 'you_api_key_goes_here') #Get your key at https://platform.openai.com/api-keys response = client.chat.completions.create( model="gpt-4-0125-preview", # The Latest GPT-4 model. Trained with data till end of 2023 messages =[{'role':'user', 'content':"Write a short story about a robot saving earth from Aliens."}], max_tokens=250, # Response text length. temperature=0.6, # Ranges from 0 to 2, lower values ==> Determinism, Higher Values ==> Randomness top_p=1, # Ranges 0 to 1. Controls the pool of tokens. Lower ==> Narrower selection of words frequency_penalty=0, # used to discourage the model from repeating the same words or phrases too frequently within the generated text presence_penalty=0) # used to encourage the model to include a diverse range of tokens in the generated text. print(response.choices[0].message.content) In this example, we use the client.chat.completions.create function to generate text. The model parameter specifies which version of the model to use, with "gpt-4-0125-preview" representing the latest GPT-4 preview that is trained with the data available up to Dec 2023. The messages parameter feeds the initial text to the model, serving as the basis for the generated content. Other parameters like max_tokens, temperature, and top_p allow us to control the length and creativity of the output. Applications and Implications The applications of GPT-4 extend far beyond simple text generation. Industries ranging from entertainment to customer service find value in its ability to create compelling narratives, generate informative content, and even converse with users in a natural manner. However, as we integrate these models more deeply into our digital experiences, ethical considerations come to the forefront. Issues such as bias, misinformation, and the potential for misuse necessitate a thoughtful approach to deployment and regulation. 
Conclusion GPT-4's capabilities represent a significant leap forward in the field of artificial intelligence, offering tools that can understand and generate human-like text with remarkable accuracy. The Python example provided herein serves as a starting point for exploring the vast potential of text generation models. As we continue to push the boundaries of what AI can achieve, it remains crucial to navigate the ethical landscape with care, ensuring that these technologies augment human creativity and knowledge rather than detract from it. In summary, GPT-4 not only showcases the power of modern AI but also invites us to reimagine the future of human-computer interaction. With each advancement, we step closer to a world where machines understand not just the words we say but the meaning and emotion behind them, unlocking new possibilities for creativity, efficiency, and understanding.

By Ashok Gorantla
Scaling Java Microservices to Extreme Performance Using NCache

Microservices have emerged as a transformative architectural approach in the realm of software development, offering a paradigm shift from monolithic structures to a more modular and scalable system. At its core, microservices involve breaking down complex applications into smaller, independently deployable services that communicate seamlessly, fostering agility, flexibility, and ease of maintenance. This decentralized approach allows developers to focus on specific functionalities, enabling rapid development, continuous integration, and efficient scaling to meet the demands of modern, dynamic business environments. As organizations increasingly embrace the benefits of microservices, this article explores the key principles, advantages, and challenges associated with this architectural style, shedding light on its pivotal role in shaping the future of software design and deployment. A fundamental characteristic of microservices applications is the ability to design, develop, and deploy each microservice independently, utilizing diverse technology stacks. Each microservice functions as a self-contained, autonomous application with its own dedicated persistent storage, whether it be a relational database, a NoSQL DB, or even a legacy file storage system. This autonomy enables individual microservices to scale independently, facilitating seamless real-time infrastructure adjustments and enhancing overall manageability. NCache Caching Layer in Microservice Architecture In scenarios where application transactions surge, bottlenecks may persist, especially in architectures where microservices store data in non-scalable relational databases. Simply deploying additional instances of the microservice doesn't alleviate the problem. To address these challenges, consider integrating NCache as a distributed cache at the caching layer between microservices and datastores. NCache serves not only as a cache but also functions as a scalable in-memory publisher/subscriber messaging broker, facilitating asynchronous communication between microservices. Microservice Java application performance optimization can be achieved by the cache techniques like Cache item locking, grouping Cache data, Hibernate Caching, SQL Query, data structure, spring data cache technique pub-sub messaging, and many more with NCache. Please check the out-of-the-box features provided by NCache. Using NCache as Hibernate Second Level Java Cache Hibernate First-Level Cache The Hibernate first-level cache serves as a fundamental standalone (in-proc) cache linked to the Session object, limited to the current session. Nonetheless, a drawback of the first-level cache is its inability to share objects between different sessions. If the same object is required by multiple sessions, each triggers a database trip to load it, intensifying database traffic and exacerbating scalability issues. Furthermore, when the session concludes, all cached data is lost, necessitating a fresh fetch from the database upon the next retrieval. Hibernate Second-Level Cache For high-traffic Hibernate applications relying solely on the first-level cache, deployment in a web farm introduces challenges related to cache synchronization across servers. In a web farm setup, each node operates a web server—such as Apache, Oracle WebLogic, etc.—with multiple instances of httpd processes to serve requests. 
Each Hibernate first-level cache in these HTTP worker processes maintains a distinct version of the same data directly cached from the database, posing synchronization issues. This is why Hibernate offers a second-level cache with a provider model. The Hibernate second-level cache enables you to integrate third-party distributed (out-proc) caching providers to cache objects across sessions and servers. Unlike the first-level cache, the second-level cache is associated with the SessionFactory object and is accessible to the entire application, extending beyond a single session. Enabling the Hibernate second-level cache results in the coexistence of two caches: the first-level cache and the second-level cache. Hibernate endeavors to retrieve objects from the first-level cache first; if unsuccessful, it attempts to fetch them from the second-level cache. If both attempts fail, the objects are directly loaded from the database and cached. This configuration substantially reduces database traffic, as a significant portion of the data is served by the second-level distributed cache. NCache Java has implemented a Hibernate second-level caching provider by extending org.hibernate.cache.CacheProvider. Integrating NCache Java Hibernate distributed caching provider with the Hibernate application requires no code changes. This integration enables you to scale your Hibernate application to multi-server configurations without the database becoming a bottleneck. NCache also delivers enterprise-level distributed caching features, including data size management, data synchronization across servers, and more. To incorporate the NCache Java Hibernate caching provider, a simple modification of your hibernate.cfg.xml and ncache.xml is all that is required. Thus, with the NCache Java Hibernate distributed cache provider, you can achieve linear scalability for your Hibernate applications seamlessly, requiring no alterations to your existing code. Code Snippet Java // Configure Hibernate properties programmatically Properties hibernateProperties = new Properties(); hibernateProperties.put("hibernate.connection.driver_class", "org.h2.Driver"); hibernateProperties.put("hibernate.connection.url", "jdbc:h2:mem:testdb"); hibernateProperties.put("hibernate.show_sql", "false"); hibernateProperties.put("hibernate.hbm2ddl.auto", "create-drop"); hibernateProperties.put("hibernate.cache.use_query_cache", "true"); hibernateProperties.put("hibernate.cache.use_second_level_cache", "true"); hibernateProperties.put("hibernate.cache.region.factory_class", "org.hibernate.cache.jcache.internal.JCacheRegionFactory"); hibernateProperties.put("hibernate.javax.cache.provider", "com.alachisoft.ncache.hibernate.jcache.HibernateNCacheCachingProvider"); // Set other Hibernate properties as needed Configuration configuration = new Configuration() .setProperties(hibernateProperties).addAnnotatedClass(Product.class); Logger.getLogger("org.hibernate").setLevel(Level.OFF); // Build the ServiceRegistry ServiceRegistry serviceRegistry = new StandardServiceRegistryBuilder() .applySettings(configuration.getProperties()).build(); // Build the SessionFactory SessionFactory factory = configuration.buildSessionFactory(serviceRegistry); // Create a List of Product objects ArrayList<Product> products = (ArrayList<Product>) getProducts(); // Open a new Hibernate session to save products to the database. 
This also caches it try (Session session = factory.openSession()) { Transaction transaction = session.beginTransaction(); // save() method saves products to the database and caches it too System.out.println("ProductID, Name, Price, Category"); for (Product product : products) { System.out.println("- " + product.getProductID() + ", " + product.getName() + ", " + product.getPrice() + ", " + product.getCategory()); session.save(product); } transaction.commit(); System.out.println(); // Now open a new session to fetch products from the DB. // But, these products are actually fetched from the cache try (Session session = factory.openSession()) { List<Product> productList = (List<Product>) session.createQuery("from Product").list(); if (productList != null) { printProductDetails(productList); } } Integrate NCache with Hibernate to effortlessly cache the results of queries. When these objects are subsequently fetched by Hibernate, they are retrieved from the cache, thereby avoiding a costly trip to the database. From the above code sample, the products are saved in the database, and it also caches; now, when the new session opens to fetch the product details, it will fetch from the Cache and avoid an unnecessary database trip. Learn more about Hibernate Caching Scaling With NCache Pub/Sub Messaging NCache is a distributed in-memory caching solution designed for .NET. Its compatibility extends to Java through a native client and third-party integrations, ensuring seamless support for both platforms. NCache serves as an in-memory distributed data store tailored for .NET and Java, offering a feature-rich, in-memory pub/sub mechanism for event-driven communication. This makes it straightforward to set up NCache as a messaging broker, employing the Pub/Sub model for seamless asynchronous communication between microservices. Using NCache In-Memory Pub/Sub for Microservices NCache enables Pub/Sub functionality by establishing a topic where microservices can publish and subscribe to events. These events are published to the NCache message broker outside the microservice. Within each subscribing microservice, there exists an event handler to manage the corresponding event once it has been published by the originating microservice. In the realm of Java microservices, NCache functions as an event bus or message broker, facilitating the relay of messages to one or multiple subscribers. In the context of Pub/Sub models that necessitate a communication channel, NCache serves as a medium for topics. This entails the publisher dispatching messages to the designated topic and subscribers receiving notifications through the same topic. Employing NCache as the medium for topics promotes loose coupling within the model, offering enhanced abstraction and additional advantages for distributed topics. Publish The code snippet below initializes the messageService object using NCache MessagingService package. Initializing the Topic Java // Create a Topic in NCache. 
MessagingService messagingService = cache.getMessagingService(); Topic topic = messagingService.createTopic(topicName); // Create a thread pool for publishers ExecutorService publisherThreadPool = Executors.newFixedThreadPool(2); The code snippet below registers the subscribers to this topic. Register subscribers to this Topic MessageReceivedListener subscriptionListener1 = new MessageReceivedListener() { @Override public void onMessageReceived(Object o, MessageEventArgs messageEventArgs) { messageReceivedSubscription1(messageEventArgs.getMessage()); } }; MessageReceivedListener subscriptionListener2 = new MessageReceivedListener() { @Override public void onMessageReceived(Object o, MessageEventArgs messageEventArgs) { messageReceivedSubscription2(messageEventArgs.getMessage()); } }; TopicSubscription subscription1 = topic.createSubscription(subscriptionListener1); TopicSubscription subscription2 = topic.createSubscription(subscriptionListener2); NCache provides two variants of durable subscriptions to cater to the message durability needs within your Java microservices: Shared Durable Subscriptions: This allows multiple subscribers to connect to a single subscription. The Round Robin approach is employed to distribute messages among the various subscribers. Even if a subscriber exits the network, messages persistently flow between the active subscribers. Exclusive Durable Subscriptions: In this type, only one active subscriber is allowed on a subscription at any given time. No new subscriber requests are accepted for the same subscription while the existing connection remains active. Learn more about the Pub/Sub Messaging implementation with NCache here: Pub/Sub Messaging in Cache: An Overview SQL Query on Cache NCache provides your microservices with the capability to perform SQL-like queries on indexed cache data. This functionality becomes particularly beneficial when the values of the keys storing the desired information are not known. It abstracts much of the lower-level cache API calls, contributing to clearer and more maintainable application code. This feature is especially advantageous for individuals who find SQL-like commands more intuitive and comfortable to work with. NCache provides functionality for searching and removing cache data through queries similar to SQL's SELECT and DELETE statements. However, operations like INSERT and UPDATE are not available. For executing SELECT queries within the cache, NCache utilizes ExecuteReader; the ExecuteScalar function is used to carry out a query and retrieve the first row's first column from the resulting data set, disregarding any extra columns or rows. For NCache SQL queries to function, indexes must be established on all objects undergoing search. This can be achieved through two methods: configuring the cache or utilizing code with "Custom Attributes" to annotate object fields. When objects are added to the cache, this approach automatically creates indexes on the specified fields. Code Snippet Java String cacheName = "demoCache"; // Connect to the cache and return a cache handle Cache cache = CacheManager.getCache(cacheName); // Adds all the products to the cache. This automatically creates indexes on various // attributes of Product object by using "Custom Attributes". addSampleData(cache); // The $VALUE$ keyword returns the entire object; individual attributes can also be selected String sql = "SELECT $VALUE$ FROM com.alachisoft.ncache.samples.Product WHERE category IN (?, ?) 
AND price < ?"; QueryCommand sqlCommand = new QueryCommand(sql); List<String> catParamList = new ArrayList<>(Arrays.asList("Electronics", "Stationery")); sqlCommand.getParameters().put("category", catParamList); sqlCommand.getParameters().put("price", 2000); // ExecuteReader returns ICacheReader with the query resultset CacheReader resultSet = cache.getSearchService().executeReader(sqlCommand); List<Product> fetchedProducts = new ArrayList<>(); if (resultSet.getFieldCount() > 0) { while (resultSet.read()) { // getValue() with the $VALUE$ keyword returns the entire object instead of just one column fetchedProducts.add(resultSet.getValue("$VALUE$", Product.class)); } } printProducts(fetchedProducts); Utilize SQL in NCache to perform queries on cached data by focusing on object attributes and Tags, rather than solely relying on keys. In this example, we utilize "Custom Attributes" to generate an index on the Product object. Learn more about SQL Query with NCache in Java: Query Data in Cache Using SQL Read-Thru and Write-Thru Utilize the Data Source Providers feature of NCache to position it as the primary interface for data access within your microservices architecture. When a microservice needs data, it should first query the cache. If the data is present, the cache supplies it directly. Otherwise, the cache employs a read-thru handler to fetch the data from the datastore on behalf of the client, caches it, and then provides it to the microservice. In a similar fashion, for write operations (such as Add, Update, Delete), a microservice can perform these actions on the cache. The cache then automatically carries out the corresponding write operation on the datastore using a write-thru handler. Furthermore, you have the option to compel the cache to fetch data directly from the datastore, regardless of the presence of a possibly outdated version in the cache. This feature is essential when microservices require the most current information and complements the previously mentioned cache consistency strategies. The integration of the Data Source Provider feature not only simplifies your application code but also, when combined with NCache's database synchronization capabilities, ensures that the cache is consistently updated with fresh data for processing. ReadThruProvider For implementing Read-Through caching, it's necessary to create an implementation of the ReadThruProvider interface in Java. Here's a code snippet to get started with implementing Read-Thru in your microservices: Java ReadThruOptions readThruOptions = new ReadThruOptions(ReadMode.ReadThru, _readThruProviderName); product = _cache.get(_productId, readThruOptions, Product.class); Read more about Read-Thru implementation here: Read-Through Provider Configuration and Implementation WriteThruProvider For implementing Write-Through caching, it's necessary to create an implementation of the WriteThruProvider interface in Java. Here's a code snippet to get started with implementing Write-Thru in your microservices: Java _product = new Product(); WriteThruOptions writeThruOptions = new WriteThruOptions(WriteMode.WriteThru, _writeThruProviderName); CacheItem cacheItem = new CacheItem(_product); _cache.insert(_product.getProductID(), cacheItem, writeThruOptions); Read more about Write-Thru implementation here: Write-Through Provider Configuration and Implementation Summary Microservices are designed to be autonomous, enabling independent development, testing, and deployment from other microservices. 
While microservices provide benefits in scalability and rapid development cycles, some components of the application stack can present challenges. One such challenge is the use of relational databases, which may not support the necessary scale-out to handle growing loads. This is where a distributed caching solution like NCache becomes valuable. In this article, we have seen the variety of ready-to-use features like pub/sub messaging, data caching, SQL Query, Read-Thru and Write-Thru, and Hibernate second-level Java Cache techniques offered by NCache that simplify and streamline the integration of data caching into your microservices application, making it an effortless and natural extension.
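For reference, the Hibernate example earlier in this section registers Product.class as an annotated entity but never shows the class itself. The sketch below is an assumed, minimal version of what such an entity might look like: the field names simply mirror the getters used in the sample (getProductID, getName, getPrice, getCategory), and the standard JPA/Hibernate annotations mark it for second-level caching. Treat it as an illustration rather than the exact class from the sample project.
Java
import javax.persistence.Cacheable;
import javax.persistence.Entity;
import javax.persistence.Id;
import org.hibernate.annotations.Cache;
import org.hibernate.annotations.CacheConcurrencyStrategy;

// Minimal sketch of the Product entity referenced by addAnnotatedClass(Product.class) above.
// The cache concurrency strategy is illustrative; pick one supported by your cache provider.
@Entity
@Cacheable
@Cache(usage = CacheConcurrencyStrategy.READ_WRITE)
public class Product {

    @Id
    private int productID;
    private String name;
    private double price;
    private String category;

    public Product() { }

    public Product(int productID, String name, double price, String category) {
        this.productID = productID;
        this.name = name;
        this.price = price;
        this.category = category;
    }

    public int getProductID() { return productID; }
    public String getName() { return name; }
    public double getPrice() { return price; }
    public String getCategory() { return category; }
}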

By Gowtham K
Leveraging Java's Fork/Join Framework for Efficient Parallel Programming: Part 1
Leveraging Java's Fork/Join Framework for Efficient Parallel Programming: Part 1

In concurrent programming, efficient parallelism is essential for maximizing the performance of applications. Java, being a popular programming language for various domains, provides robust support for parallel programming through its Fork/Join framework. This framework enables developers to write concurrent programs that leverage multicore processors effectively. In this comprehensive guide, we'll delve into the intricacies of the Fork/Join framework, explore its underlying principles, and provide practical examples to demonstrate its usage. Key Components ForkJoinPool: The central component of the Fork/Join Framework is ForkJoinPool, which manages a pool of worker threads responsible for executing tasks. It automatically scales the number of threads based on the available processors, optimizing resource utilization. ForkJoinTask: ForkJoinTaskis an abstract class representing a task that can be executed asynchronously. It provides two main subclasses: RecursiveTask: Used for tasks that return a result RecursiveAction: Used for tasks that don't return a result (i.e., void tasks) ForkJoinWorkerThread: This class represents worker threads within the ForkJoinPool. It provides hooks for customization, allowing developers to define thread-specific behavior. Deep Dive Into Fork/Join Workflow Task partitioning: When a task is submitted to the ForkJoinPool, it's initially executed sequentially until a certain threshold is reached. Beyond this threshold, the task is recursively split into smaller subtasks, which are distributed among the worker threads. Task execution: Worker threads execute the subtasks assigned to them in parallel. If a thread encounters a subtask marked for further division (i.e., "forked"), it splits the task and submits the subtasks to the pool. Result aggregation: Once the subtasks complete their execution, their results are combined to produce the final result. This process continues recursively until all subtasks are completed, and the final result is obtained. Take, for instance, a task designed to calculate the sum of values in an integer array. For small arrays, the task computes the sum directly. For larger arrays, it splits the array and assigns the subarrays to new tasks, which are then executed in parallel. Java class ArraySumCalculator extends RecursiveTask<Integer> { private int[] array; private int start, end; ArraySumCalculator(int[] array, int start, int end) { this.array = array; this.start = start; this.end = end; } @Override protected Integer compute() { if (end - start <= THRESHOLD) { int sum = 0; for (int i = start; i < end; i++) { sum += array[i]; } return sum; } else { int mid = start + (end - start) / 2; ArraySumCalculator leftTask = new ArraySumCalculator(array, start, mid); ArraySumCalculator rightTask = new ArraySumCalculator(array, mid, end); leftTask.fork(); int rightSum = rightTask.compute(); int leftSum = leftTask.join(); return leftSum + rightSum; } } } This task can then be executed by a ForkJoinPool: Java ForkJoinPool pool = new ForkJoinPool(); Integer totalSum = pool.invoke(new ArraySumCalculator(array, 0, array.length)); The Mechanics Behind ForkJoinPool The ForkJoinPool distinguishes itself as a specialized variant of ExecutorService, adept at managing a vast array of tasks, particularly those that adhere to the recursive nature of Fork/Join operations. 
Here's a breakdown of its fundamental components and operational dynamics: The Work-Stealing Paradigm Individual task queues: Every worker thread within a ForkJoinPool is equipped with its deque (double-ended queue) for tasks. Newly initiated tasks by a thread are placed at the head of its deque. Task redistribution: Threads that deplete their task queue engage in "stealing" tasks from the bottom of other threads' deques. This strategy of redistributing work ensures a more even workload distribution among threads, enhancing efficiency and resource utilization. ForkJoinTask Dynamics Task division: The act of forking divides a larger task into smaller, manageable subtasks, which are then dispatched to the pool for execution by available threads. This division places the subdivided tasks into the initiating thread's deque. Task completion: When a task awaits the completion of its forked subtasks (through the join method), it doesn't remain idle but instead seeks out other tasks to execute, either from its deque or by stealing, maintaining active participation in the pool's workload. Task Processing Logic Execution order: Worker threads typically process tasks in a last-in-first-out (LIFO) sequence, optimizing for tasks that are likely interconnected and could benefit from data locality. Conversely, the stealing process adheres to a first-in-first-out (FIFO) sequence, promoting a balanced task distribution. Adaptive Thread Management Responsive scaling: The ForkJoinPool dynamically adjusts its active thread count in response to the current workload and task characteristics, aiming to balance effective core utilization against the drawbacks of excessive threading, such as overhead and resource contention. Leveraging Internal Mechanics for Performance Optimization Grasping the inner workings of ForkJoinPool is essential for devising effective strategies for task granularity, pool configuration, and task organization: Determining task size: Understanding the individual task queues per thread can inform the decision-making process regarding the optimal task size, balancing between minimizing management overhead and ensuring full exploitation of the work-stealing feature. Tailoring ForkJoinPool settings: Insights into the pool's dynamic thread adjustment capabilities and work-stealing algorithm can guide the customization of pool parameters, such as parallelism levels, to suit specific application demands and hardware capabilities. Ensuring balanced workloads: Knowledge of how tasks are processed and redistributed can aid in structuring tasks to facilitate efficient workload distribution across threads, optimizing resource usage. Strategizing task design: Recognizing the impact of fork and join operations on task execution and thread engagement can lead to more effective task structuring, minimizing downtime, and maximizing parallel efficiency. Complex Use Cases For more complex scenarios, consider tasks that involve recursive data structures or algorithms, such as parallel quicksort or mergesort. These algorithms are inherently recursive and can benefit significantly from the Fork/Join framework's ability to handle nested tasks efficiently. For instance, in a parallel mergesort implementation, the array is divided into halves until the base case is reached. Each half is then sorted in parallel, and the results are merged. This approach can dramatically reduce sorting time for large datasets. 
Java class ParallelMergeSort extends RecursiveAction { private int[] array; private int start, end; ParallelMergeSort(int[] array, int start, int end) { this.array = array; this.start = start; this.end = end; } @Override protected void compute() { if (end - start <= THRESHOLD) { Arrays.sort(array, start, end); // Direct sort for small arrays } else { int mid = start + (end - start) / 2; ParallelMergeSort left = new ParallelMergeSort(array, start, mid); ParallelMergeSort right = new ParallelMergeSort(array, mid, end); invokeAll(left, right); // Concurrently sort both halves merge(array, start, mid, end); // Merge the sorted halves } } // Method to merge two halves of an array private void merge(int[] array, int start, int mid, int end) { // Implementation of merging logic } } Advanced Tips and Best Practices Dynamic Task Creation In scenarios where the data structure is irregular or the problem size varies significantly, dynamically creating tasks based on the runtime characteristics of the data can lead to more efficient utilization of system resources. Custom ForkJoinPool Management For applications running multiple Fork/Join tasks concurrently, consider creating separate ForkJoinPool instances with custom parameters to optimize the performance of different task types. This allows for fine-tuned control over thread allocation and task handling. Exception Handling Use the ForkJoinTask's get method, which throws an ExecutionException if any of the recursively executed tasks result in an exception. This approach allows for centralized exception handling, simplifying debugging, and error management. Java try { forkJoinPool.invoke(new ParallelMergeSort(array, 0, array.length)); } catch (ExecutionException e) { Throwable cause = e.getCause(); // Get the actual cause of the exception // Handle the exception appropriately } Workload Balancing When dealing with tasks of varying sizes, it's crucial to balance the workload among threads to avoid scenarios where some threads remain idle while others are overloaded. Techniques such as work stealing, as implemented by the Fork/Join framework, are essential in such cases. Avoiding Blocking When a task waits for another task to complete, it can lead to inefficiencies and reduced parallelism. Whenever possible, structure your tasks to minimize blocking operations. Utilizing the join method after initiating all forked tasks helps keep threads active. Performance Monitoring and Profiling Java's VisualVM or similar profiling tools can be invaluable in identifying performance bottlenecks and understanding how tasks are executed in parallel. Monitoring CPU usage, memory consumption, and task execution times helps pinpoint inefficiencies and guide optimizations. For instance, if VisualVM shows that most of the time is spent on a small number of tasks, it might indicate that the task granularity is too coarse, or that certain tasks are much more computationally intensive than others. Load Balancing and Work Stealing The Fork/Join framework's work-stealing algorithm is designed to keep all processor cores busy, but imbalances can still occur, especially with heterogeneous tasks. In such cases, breaking down tasks into smaller parts or using techniques to dynamically adjust the workload can help achieve better load balancing. An example strategy might involve monitoring task completion times and dynamically adjusting the size of future tasks based on this feedback, ensuring that all cores finish their workload at roughly the same time. 
Avoiding Common Pitfalls Common pitfalls such as unnecessary task splitting, improper use of blocking operations, or neglecting exceptions can degrade performance. Ensuring tasks are divided in a manner that maximizes parallel execution without creating too much overhead is key. Additionally, handling exceptions properly and avoiding blocking operations within tasks can prevent slowdowns and ensure smooth execution. Enhancing Performance With Strategic Tuning Through strategic tuning and optimization, developers can unleash the full potential of the Fork/Join framework, achieving remarkable improvements in the performance of parallel tasks. By carefully considering task granularity, customizing the Fork/JoinPool, diligently monitoring performance, and avoiding pitfalls, applications can be optimized to fully leverage the computational resources available, leading to faster, more efficient parallel processing. Conclusion The Fork/Join framework in Java offers a streamlined approach to parallel programming, abstracting complexities for developers. By mastering its components and inner workings, developers can unlock the full potential of multicore processors. With its intuitive design and efficient task management, the framework enables scalable and high-performance parallel applications. Armed with this understanding, developers can confidently tackle complex computational tasks, optimize performance, and meet the demands of modern computing environments. The Fork/Join framework remains a cornerstone of parallel programming in Java, empowering developers to harness the power of concurrency effectively.
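As a closing illustration of the tuning advice above, the sketch below shows two standard JDK mechanisms for controlling parallelism: constructing a dedicated ForkJoinPool with an explicit parallelism level, and configuring the common pool through its system property before it is first used. The parallelism values and the reuse of the ParallelMergeSort task from earlier are illustrative only; appropriate numbers should come from measuring your own workload and hardware.
Java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.ThreadLocalRandom;

public class PoolTuningExample {
    public static void main(String[] args) {
        int[] data = new int[1_000_000];
        for (int i = 0; i < data.length; i++) {
            data[i] = ThreadLocalRandom.current().nextInt();
        }

        // Option 1: a dedicated pool with an explicit parallelism level,
        // useful for isolating one workload from everything else in the JVM.
        ForkJoinPool dedicatedPool = new ForkJoinPool(4);
        dedicatedPool.invoke(new ParallelMergeSort(data, 0, data.length)); // task from the example above
        dedicatedPool.shutdown();

        // Option 2: tune the common pool (used by parallel streams and
        // ForkJoinPool.commonPool()) via its system property before first use.
        System.setProperty("java.util.concurrent.ForkJoinPool.common.parallelism", "8");
        System.out.println("Common pool parallelism: "
                + ForkJoinPool.commonPool().getParallelism());
    }
}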

By Andrei Tuchin DZone Core CORE
Hints for Unit Testing With AssertJ
Hints for Unit Testing With AssertJ

Unit testing has become a standard part of development. Many tools can be utilized for it in many different ways. This article demonstrates a couple of hints or, let's say, best practices working well for me. In This Article, You Will Learn How to write clean and readable unit tests with JUnit and Assert frameworks How to avoid false positive tests in some cases What to avoid when writing unit tests Don't Overuse NPE Checks We all tend to avoid NullPointerException as much as possible in the main code because it can lead to ugly consequences. I believe our main concern is not to avoid NPE in tests. Our goal is to verify the behavior of a tested component in a clean, readable, and reliable way. Bad Practice Many times in the past, I've used isNotNull assertion even when it wasn't needed, like in the example below: Java @Test public void getMessage() { assertThat(service).isNotNull(); assertThat(service.getMessage()).isEqualTo("Hello world!"); } This test produces errors like this: Plain Text java.lang.AssertionError: Expecting actual not to be null at com.github.aha.poc.junit.spring.StandardSpringTest.test(StandardSpringTest.java:19) Good Practice Even though the additional isNotNull assertion is not really harmful, it should be avoided due to the following reasons: It doesn't add any additional value. It's just more code to read and maintain. The test fails anyway when service is null and we see the real root cause of the failure. The test still fulfills its purpose. The produced error message is even better with the AssertJ assertion. See the modified test assertion below. Java @Test public void getMessage() { assertThat(service.getMessage()).isEqualTo("Hello world!"); } The modified test produces an error like this: Java java.lang.NullPointerException: Cannot invoke "com.github.aha.poc.junit.spring.HelloService.getMessage()" because "this.service" is null at com.github.aha.poc.junit.spring.StandardSpringTest.test(StandardSpringTest.java:19) Note: The example can be found in SimpleSpringTest. Assert Values and Not the Result From time to time, we write a correct test, but in a "bad" way. It means the test works exactly as intended and verifies our component, but the failure isn't providing enough information. Therefore, our goal is to assert the value and not the comparison result. Bad Practice Let's see a couple of such bad tests: Java // #1 assertThat(argument.contains("o")).isTrue(); // #2 var result = "Welcome to JDK 10"; assertThat(result instanceof String).isTrue(); // #3 assertThat("".isBlank()).isTrue(); // #4 Optional<Method> testMethod = testInfo.getTestMethod(); assertThat(testMethod.isPresent()).isTrue(); Some errors from the tests above are shown below. Plain Text #1 Expecting value to be true but was false at java.base/jdk.internal.reflect.DirectConstructorHandleAccessor.newInstance(DirectConstructorHandleAccessor.java:62) at java.base/java.lang.reflect.Constructor.newInstanceWithCaller(Constructor.java:502) at com.github.aha.poc.junit5.params.SimpleParamTests.stringTest(SimpleParamTests.java:23) #3 Expecting value to be true but was false at java.base/jdk.internal.reflect.DirectConstructorHandleAccessor.newInstance(DirectConstructorHandleAccessor.java:62) at java.base/java.lang.reflect.Constructor.newInstanceWithCaller(Constructor.java:502) at com.github.aha.poc.junit5.ConditionalTests.checkJdk11Feature(ConditionalTests.java:50) Good Practice The solution is quite easy with AssertJ and its fluent API. 
All the cases mentioned above can be easily rewritten as: Java // #1 assertThat(argument).contains("o"); // #2 assertThat(result).isInstanceOf(String.class); // #3 assertThat("").isBlank(); // #4 assertThat(testMethod).isPresent(); The very same errors as mentioned before provide more value now. Plain Text #1 Expecting actual: "Hello" to contain: "f" at com.github.aha.poc.junit5.params.SimpleParamTests.stringTest(SimpleParamTests.java:23) #3 Expecting blank but was: "a" at com.github.aha.poc.junit5.ConditionalTests.checkJdk11Feature(ConditionalTests.java:50) Note: The example can be found in SimpleParamTests. Group-Related Assertions Together The assertion chaining and a related code indentation help a lot in the test clarity and readability. Bad Practice As we write a test, we can end up with the correct, but less readable test. Let's imagine a test where we want to find countries and do these checks: Count the found countries. Assert the first entry with several values. Such tests can look like this example: Java @Test void listCountries() { List<Country> result = ...; assertThat(result).hasSize(5); var country = result.get(0); assertThat(country.getName()).isEqualTo("Spain"); assertThat(country.getCities().stream().map(City::getName)).contains("Barcelona"); } Good Practice Even though the previous test is correct, we should improve the readability a lot by grouping the related assertions together (lines 9-11). The goal here is to assert result once and write many chained assertions as needed. See the modified version below. Java @Test void listCountries() { List<Country> result = ...; assertThat(result) .hasSize(5) .singleElement() .satisfies(c -> { assertThat(c.getName()).isEqualTo("Spain"); assertThat(c.getCities().stream().map(City::getName)).contains("Barcelona"); }); } Note: The example can be found in CountryRepositoryOtherTests. Prevent False Positive Successful Test When any assertion method with the ThrowingConsumer argument is used, then the argument has to contain assertThat in the consumer as well. Otherwise, the test would pass all the time - even when the comparison fails, which means the wrong test. The test fails only when an assertion throws a RuntimeException or AssertionError exception. I guess it's clear, but it's easy to forget about it and write the wrong test. It happens to me from time to time. Bad Practice Let's imagine we have a couple of country codes and we want to verify that every code satisfies some condition. In our dummy case, we want to assert that every country code contains "a" character. As you can see, it's nonsense: we have codes in uppercase, but we aren't applying case insensitivity in the assertion. Java @Test void assertValues() throws Exception { var countryCodes = List.of("CZ", "AT", "CA"); assertThat( countryCodes ) .hasSize(3) .allSatisfy(countryCode -> countryCode.contains("a")); } Surprisingly, our test passed successfully. Good Practice As mentioned at the beginning of this section, our test can be corrected easily with additional assertThat in the consumer (line 7). The correct test should be like this: Java @Test void assertValues() throws Exception { var countryCodes = List.of("CZ", "AT", "CA"); assertThat( countryCodes ) .hasSize(3) .allSatisfy(countryCode -> assertThat( countryCode ).containsIgnoringCase("a")); } Now the test fails as expected with the correct error message. 
Plain Text java.lang.AssertionError: Expecting all elements of: ["CZ", "AT", "CA"] to satisfy given requirements, but these elements did not: "CZ" error: Expecting actual: "CZ" to contain: "a" (ignoring case) at com.github.aha.sat.core.clr.AppleTest.assertValues(AppleTest.java:45) Chain Assertions The last hint is not really the practice, but rather the recommendation. The AssertJ fluent API should be utilized in order to create more readable tests. Non-Chaining Assertions Let's consider listLogs test, whose purpose is to test the logging of a component. The goal here is to check: Asserted number of collected logs Assert existence of DEBUG and INFO log message Java @Test void listLogs() throws Exception { ListAppender<ILoggingEvent> logAppender = ...; assertThat( logAppender.list ).hasSize(2); assertThat( logAppender.list ).anySatisfy(logEntry -> { assertThat( logEntry.getLevel() ).isEqualTo(DEBUG); assertThat( logEntry.getFormattedMessage() ).startsWith("Initializing Apple"); }); assertThat( logAppender.list ).anySatisfy(logEntry -> { assertThat( logEntry.getLevel() ).isEqualTo(INFO); assertThat( logEntry.getFormattedMessage() ).isEqualTo("Here's Apple runner" ); }); } Chaining Assertions With the mentioned fluent API and the chaining, we can change the test this way: Java @Test void listLogs() throws Exception { ListAppender<ILoggingEvent> logAppender = ...; assertThat( logAppender.list ) .hasSize(2) .anySatisfy(logEntry -> { assertThat( logEntry.getLevel() ).isEqualTo(DEBUG); assertThat( logEntry.getFormattedMessage() ).startsWith("Initializing Apple"); }) .anySatisfy(logEntry -> { assertThat( logEntry.getLevel() ).isEqualTo(INFO); assertThat( logEntry.getFormattedMessage() ).isEqualTo("Here's Apple runner" ); }); } Note: the example can be found in AppleTest. Summary and Source Code The AssertJ framework provides a lot of help with their fluent API. In this article, several tips and hints were presented in order to produce clearer and more reliable tests. Please be aware that most of these recommendations are subjective. It depends on personal preferences and code style. The used source code can be found in my repositories: spring-advanced-training junit-poc

By Arnošt Havelka DZone Core CORE
Building and Deploying a Chatbot With Google Cloud Run and Dialogflow
Building and Deploying a Chatbot With Google Cloud Run and Dialogflow

In this tutorial, we will learn how to build and deploy a conversational chatbot using Google Cloud Run and Dialogflow. This chatbot will provide responses to user queries on a specific topic, such as weather information, customer support, or any other domain you choose. We will cover the steps from creating the Dialogflow agent to deploying the webhook service on Google Cloud Run. Prerequisites A Google Cloud Platform (GCP) account Basic knowledge of Python programming Familiarity with Google Cloud Console Step 1: Set Up Dialogflow Agent Create a Dialogflow Agent: Log into the Dialogflow Console (Google Dialogflow). Click on "Create Agent" and fill in the agent details. Select the Google Cloud Project you want to associate with this agent. Define Intents: Intents classify the user's intentions. For each intent, specify examples of user phrases and the responses you want Dialogflow to provide. For example, for a weather chatbot, you might create an intent named "WeatherInquiry" with user phrases like "What's the weather like in Dallas?" and set up appropriate responses. Step 2: Develop the Webhook Service The webhook service processes requests from Dialogflow and returns dynamic responses. We'll use Flask, a lightweight WSGI web application framework in Python, to create this service. Set Up Your Development Environment: Ensure you have Python and pip installed. Create a new directory for your project and set up a virtual environment: Shell python -m venv env source env/bin/activate # `env\Scripts\activate` for windows Install Dependencies: Install Flask and the Dialogflow library: Shell pip install Flask google-cloud-dialogflow Create the Flask App: In your project directory, create a file named app.py. This file will contain the Flask application: Python from flask import Flask, request, jsonify app = Flask(__name__) @app.route('/webhook', methods=['POST']) def webhook(): req = request.get_json(silent=True, force=True) # Process the request here. try: query_result = req.get('queryResult') intent_name = query_result.get('intent').get('displayName') response_text = f"Received intent: {intent_name}" return jsonify({'fulfillmentText': response_text}) except AttributeError: return jsonify({'fulfillmentText': "Error processing the request"}) if __name__ == '__main__': app.run(debug=True) Step 3: Deploy To Google Cloud Run Google Cloud Run is a managed platform that enables you to run containers statelessly over a fully managed environment or in your own Google Kubernetes Engine cluster. Containerize the Flask App: Create a Dockerfile in your project directory: Dockerfile FROM python:3.8-slim WORKDIR /app COPY requirements.txt requirements.txt RUN pip install -r requirements.txt COPY . . CMD ["flask", "run", "--host=0.0.0.0", "--port=8080"] Don't forget to create a requirements.txt file listing your Python dependencies: Flask==1.1.2 google-cloud-dialogflow==2.4.0 Build and Push the Container: Use Cloud Build to build your container image and push it to the container registry. Shell gcloud builds submit --tag gcr.io/YOUR_CHATBOT_PRJ_ID/chatbot-webhook . Deploy to Cloud Run: Deploy your container image to Cloud Run. Shell gcloud run deploy --image gcr.io/YOUR_PROJECT_ID/chatbot-webhook --platform managed Follow the prompts to enable the required APIs, choose a region, and allow unauthenticated invocations. Step 4: Integrate With Dialogflow In the Dialogflow Console, navigate to the Fulfillment section. 
Enable Webhook, paste the URL of your Cloud Run service (you get this URL after deploying to Cloud Run), and click "Save." Testing and Iteration Test your chatbot in the Dialogflow Console's simulator. You can refine your intents, entities, and webhook logic based on the responses you receive. Conclusion You have successfully built and deployed a conversational chatbot using Google Cloud Run and Dialogflow. This setup allows you to create scalable, serverless chatbots that can handle dynamic responses to user queries. This foundation allows for further customization and expansion, enabling the development of more complex and responsive chatbots to meet a variety of needs. Continue to refine your chatbot by adjusting intents, entities, and the webhook logic to improve interaction quality and user experience.

By Ashok Gorantla DZone Core CORE
Advanced Brain-Computer Interfaces With Java
Advanced Brain-Computer Interfaces With Java

In the first part of this series, we introduced the basics of brain-computer interfaces (BCIs) and how Java can be employed in developing BCI applications. In this second part, let's delve deeper into advanced concepts and explore a real-world example of a BCI application using NeuroSky's MindWave Mobile headset and their Java SDK. Advanced Concepts in BCI Development Motor Imagery Classification: This involves the mental rehearsal of physical actions without actual execution. Advanced machine learning algorithms like deep learning models can significantly improve classification accuracy. Event-Related Potentials (ERPs): ERPs are specific patterns in brain signals that occur in response to particular events or stimuli. Developing BCI applications that exploit ERPs requires sophisticated signal processing techniques and accurate event detection algorithms. Hybrid BCI Systems: Hybrid BCI systems combine multiple signal acquisition methods or integrate BCIs with other physiological signals (like eye tracking or electromyography). Developing such systems requires expertise in multiple signal acquisition and processing techniques, as well as efficient integration of different modalities. Real-World BCI Example Developing a Java Application With NeuroSky's MindWave Mobile NeuroSky's MindWave Mobile is an EEG headset that measures brainwave signals and provides raw EEG data. The company provides a Java-based SDK called ThinkGear Connector (TGC), enabling developers to create custom applications that can receive and process the brainwave data. Step-by-Step Guide to Developing a Basic BCI Application Using the MindWave Mobile and TGC Establish Connection: Use the TGC's API to connect your Java application with the MindWave Mobile device over Bluetooth. The TGC provides straightforward methods for establishing and managing this connection. Java ThinkGearSocket neuroSocket = new ThinkGearSocket(this); neuroSocket.start(); Acquire Data: Once connected, your application will start receiving raw EEG data from the device. This data includes information about different types of brainwaves (e.g., alpha, beta, gamma), as well as attention and meditation levels. Java public void onRawDataReceived(int rawData) { // Process raw data } Process Data: Use signal processing techniques to filter out noise and extract useful features from the raw data. The TGC provides built-in methods for some basic processing tasks, but you may need to implement additional processing depending on your application's needs. Java public void onEEGPowerReceived(EEGPower eegPower) { // Process EEG power data } Interpret Data: Determine the user's mental state or intent based on the processed data. This could involve setting threshold levels for certain values or using machine learning algorithms to classify the data. For example, a high attention level might be interpreted as the user wanting to move a cursor on the screen. Java public void onAttentionReceived(int attention) { // Interpret attention data } Perform Action: Based on the interpretation of the data, have your application perform a specific action. This could be anything from moving a cursor, controlling a game character, or adjusting the difficulty level of a task. Java if (attention > ATTENTION_THRESHOLD) { // Perform action } Improving BCI Performance With Java Optimize Signal Processing: Enhance the quality of acquired brain signals by implementing advanced signal processing techniques, such as adaptive filtering or blind source separation. 
Employ Advanced Machine Learning Algorithms: Utilize state-of-the-art machine learning models, such as deep neural networks or ensemble methods, to improve classification accuracy and reduce user training time. Libraries like DeepLearning4j or TensorFlow Java can be employed for this purpose. Personalize BCI Models: Customize BCI models for individual users by incorporating user-specific features or adapting the model parameters during operation. This can be achieved using techniques like transfer learning or online learning. Implement Efficient Real-Time Processing: Ensure that your BCI application can process brain signals and generate output commands in real time. Optimize your code, use parallel processing techniques, and leverage Java's concurrency features to achieve low-latency performance. Evaluate and Validate Your BCI Application: Thoroughly test your BCI application on a diverse group of users and under various conditions to ensure its reliability and usability. Employ standard evaluation metrics and follow best practices for BCI validation. Conclusion Advanced BCI applications require a deep understanding of brain signal acquisition, processing, and classification techniques. Java, with its extensive libraries and robust performance, is an excellent choice for implementing such applications. By exploring advanced concepts, developing real-world examples, and continuously improving BCI performance, developers can contribute significantly to this revolutionary field.
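To tie the fragments above together, here is a consolidated sketch of a single class that connects to the headset and reacts to the attention stream. It reuses the ThinkGearSocket, EEGPower, and callback names shown in the earlier snippets; the exact listener interface, method signatures, and threshold value are assumptions that may differ between versions of NeuroSky's ThinkGear Connector SDK, so this compiles only against that SDK.
Java
// Consolidated sketch assembling the snippets above into one class.
// The callback names and ThinkGearSocket usage mirror the fragments shown earlier;
// the listener interface expected by ThinkGearSocket(this) is omitted here and
// depends on the SDK version you are using.
public class AttentionCursorController {

    private static final int ATTENTION_THRESHOLD = 60; // illustrative value

    private ThinkGearSocket neuroSocket;

    public void connect() throws Exception {
        // Establish the connection to the MindWave Mobile headset via the ThinkGear Connector
        neuroSocket = new ThinkGearSocket(this);
        neuroSocket.start();
    }

    public void onRawDataReceived(int rawData) {
        // Raw EEG samples arrive here; apply filtering / feature extraction as needed
    }

    public void onEEGPowerReceived(EEGPower eegPower) {
        // Band powers (alpha, beta, gamma, ...) arrive here
    }

    public void onAttentionReceived(int attention) {
        // Interpret the attention level and act on it
        if (attention > ATTENTION_THRESHOLD) {
            performAction(); // hypothetical action, e.g., moving a cursor
        }
    }

    private void performAction() {
        // Application-specific behavior goes here
    }
}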

By Arun Pandey DZone Core CORE
Top Secrets Management Tools for 2024
Top Secrets Management Tools for 2024

Managing your secrets well is imperative in software development. It's not just about avoiding hardcoding secrets into your code, your CI/CD configurations, and more. It's about implementing tools and practices that make good secrets management almost second nature. A Quick Overview of Secrets Management What is a secret? It's any bit of code, text, or binary data that provides access to a resource or data that should have restricted access. Almost every software development process involves secrets: credentials for your developers to access your version control system (VCS) like GitHub, credentials for a microservice to access a database, and credentials for your CI/CD system to push new artifacts to production. There are three main elements to secrets management: How are you making them available to the people/resources that need them? How are you managing the lifecycle/rotation of your secrets? How are you scanning to ensure that the secrets are not being accidentally exposed? We'll look at elements one and two in terms of the secrets managers in this article. For element three, well, I'm biased toward GitGuardian because I work there (disclaimer achieved). Accidentally exposed secrets don't necessarily get a hacker into the full treasure trove, but even if they help a hacker get a foot in the door, it's more risk than you want. That's why secrets scanning should be a part of a healthy secrets management strategy. What To Look for in a Secrets Management Tool In the Secrets Management Maturity Model, hardcoding secrets into code in plaintext and then maybe running a manual scan for them is at the very bottom. Manually managing unencrypted secrets, whether hardcoded or in a .env file, is considered immature. To get to an intermediate level, you need to store them outside your code, encrypted, and preferably well-scoped and automatically rotated. It's important to differentiate between a key management system and a secret management system. Key management systems are meant to generate and manage cryptographic keys. Secrets managers will take keys, passwords, connection strings, cryptographic salts, and more, encrypt and store them, and then provide access to them for personnel and infrastructure in a secure manner. For example, AWS Key Management Service (KMS) and AWS Secrets Manager (discussed below) are related but are distinct brand names for Amazon. Besides providing a secure way to store and provide access to secrets, a solid solution will offer: Encryption in transit and at rest: The secrets are never stored or transmitted unencrypted. Automated secrets rotation: The tool can request changes to secrets and update them in its files in an automated manner on a set schedule. Single source of truth: The latest version of any secret your developers/resources need will be found there, and it is updated in real-time as keys are rotated. Role/identity scoped access: Different systems or users are granted access to only the secrets they need under the principle of least privilege. That means a microservice that accesses a MongoDB instance only gets credentials to access that specific instance and can't pull the admin credentials for your container registry. Integrations and SDKs: The service has APIs with officially blessed software to connect common resources like CI/CD systems or implement access in your team's programming language/framework of choice. 
Logging and auditing: You need to check your systems periodically for anomalous results as a standard practice; if you get hacked, the audit trail can help you track how and when each secret was accessed. Budget and scope appropriate: If you're bootstrapping with 5 developers, your needs will differ from those of a 2,000-developer company with federal contracts. Being able to pay for what you need at the level you need it is an important business consideration. The Secrets Manager List Cyberark Conjur Secrets Manager Enterprise Conjur was founded in 2011 and was acquired by Cyberark in 2017. It's grown to be one of the premiere secrets management solutions thanks to its robust feature set and large number of SDKs and integrations. With Role Based Access Controls (RBAC) and multiple authentication mechanisms, it makes it easy to get up and running using existing integrations for top developer tools like Ansible, AWS CloudFormation, Jenkins, GitHub Actions, Azure DevOps, and more. You can scope secrets access to the developers and systems that need the secrets. For example, a Developer role that accesses Conjur for a database secret might get a connection string for a test database when they're testing their app locally, while the application running in production gets the production database credentials. The Cyberark site boasts an extensive documentation set and robust REST API documentation to help you get up to speed, while their SDKs and integrations smooth out a lot of the speed bumps. In addition, GitGuardian and CyberArk have partnered to create a bridge to integrate CyberArk Conjur and GitGuardian's Has My Secrets Leaked. This is now available as an open-source project on GitHub, providing a unique solution for security teams to detect leaks and manage secrets seamlessly. Google Cloud Secret Manager When it comes to choosing Amazon Web Services (AWS), Google Cloud Platform (GCP), or Microsoft Azure (Azure), it's usually going to come down to where you're already investing your time and money. In a multi-cloud architecture, you might have resources spread across the three, but if you're automatically rotating secrets and trying to create consistency for your services, you'll likely settle on one secrets manager as a single source of truth for third-party secrets rather than spreading secrets across multiple services. While Google is behind Amazon and Microsoft in market share, it sports the features you expect from a service competing for that market, including: Encryption at rest and in transit for your secrets CLI and SDK access to secrets Logging and audit trails Permissioning via IAM CI/CD integrations with GitHub Actions, Hashicorp Terraform, and more. Client libraries for eight popular programming languages. Again, whether to choose it is more about where you're investing your time and money rather than a killer function in most cases. AWS Secrets Manager Everyone with an AWS certification, whether developer or architect, has heard of or used AWS Secrets Manager. It's easy to get it mixed up with AWS Key Management System (KMS), but the Secrets Manager is simpler. KMS creates, stores, and manages cryptographic keys. Secrets Manager lets you put stuff in a vault and retrieve it when needed. A nice feature of AWS Secrets Manager is that it can connect with a CI/CD tool like GitHub actions through OpenID Connect (OIDC), and you can create different IAM roles with tightly scoped permissions, assigning them not only to individual repositories but specific branches. 
AWS Secrets Manager can store and retrieve non-AWS secrets as well as use the roles to provide access to AWS services to a CI/CD tool like GitHub Actions. Using AWS Lambda, key rotation can be automated, which is probably the most efficient way, as the key is updated in the secrets manager milliseconds after it's changed, producing the minimum amount of disruption. As with any AWS solution, it's a good idea to create multi-region or multi-availability-zone replicas of your secrets, so if your secrets are destroyed by a fire or taken offline by an absent-minded backhoe operator, you can fail over to a secondary source automatically. At $0.40 per secret per month, it's not a huge cost for added resiliency. Azure Key Vault Azure is the #2 player in the cloud space after AWS. Their promotional literature touts their compatibility with FIPS 140-2 standards and Hardware Security Modules (HSMs), showing they have a focus on customers who are either government agencies or have business with government agencies. This is not to say that their competitors are not suitable for government or government-adjacent solutions, but that Microsoft pushes that out of the gate as a key feature. Identity-managed access, auditability, differentiated vaults, and encryption at rest and in transit are all features they share with competitors. As with most Microsoft products, it tries to be very Microsoft and will more than likely appeal more to .Net developers who use Microsoft tools and services already. While it does offer a REST API, the selection of officially blessed client libraries (Java, .Net, Spring, Python, and JavaScript) is thinner than you'll find with AWS or GCP. As noted in the AWS and GCP entries, a big factor in your decision will be which cloud provider is getting your dominant investment of time and money. And if you're using Azure because you're a Microsoft shop with a strong investment in .Net, then the choice will be obvious. Doppler While CyberArk's Conjur (discussed above) started as a solo product that was acquired and integrated into a larger suite, Doppler currently remains a standalone key vault solution. That might be attractive for some because it's cloud-provider agnostic, coding language agnostic, and has to compete on its merits instead of being the default secrets manager for a larger package of services. It offers logging, auditing, encryption at rest and in transit, and a list of integrations as long as your arm. Besides selling its abilities, it sells its SOC compliance and remediation functionalities on the front page. When you dig deeper, there's a list of integrations as long as your arm testifies to its usefulness for integrating with a wide variety of services, and its list of SDKs is more robust than Azure's. It seems to rely strongly on injecting environment variables, which can make a lot of your coding easier at the cost of the environment variables potentially ending up in run logs or crash dumps. Understanding how the systems with which you're using it treat environment variables, scope them, and the best ways to implement it with them will be part of the learning curve in adopting it. Infisical Like Doppler, Infisical uses environment variable injection. Similar to the Dotenv package for Node, when used in Node, it injects them at run time into the process object of the running app so they're not readable by any other processes or users. They can still be revealed by a crash dump or logging, so that is a caveat to consider in your code and build scripts. 
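Whichever manager performs the injection, the application-side pattern hinted at in the Doppler discussion above is the same: read the secret from the runtime environment rather than from a hardcoded constant, and fail fast if it is absent. The sketch below is a generic, vendor-neutral illustration; the variable name DB_PASSWORD is purely hypothetical.
Java
public class DatabaseConfig {

    // Reads a secret injected by the secrets manager or runtime environment.
    // Throws immediately if the secret is missing instead of failing later with a vague error.
    static String requireSecret(String name) {
        String value = System.getenv(name);
        if (value == null || value.isBlank()) {
            throw new IllegalStateException("Missing required secret: " + name);
        }
        return value;
    }

    public static void main(String[] args) {
        String dbPassword = requireSecret("DB_PASSWORD");
        // Use the secret to build a connection, client, etc.
        // Avoid printing or logging the value itself so it never lands in log aggregation.
        System.out.println("DB password loaded (" + dbPassword.length() + " chars)");
    }
}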
Infisical offers other features besides a secrets vault, such as configuration sharing for developer teams and secrets scanning for your codebase, git history, and as a pre-commit hook. You might ask why someone writing for GitGuardian would mention a product with a competing feature. Aside from the scanning, their secrets and configuration vault/sharing model offers virtual secrets, over 20 cloud integrations, nine CI/CD integrations, over a dozen framework integrations, and SDKs for four programming languages. Their software is mostly open-source, and there is a free tier, but features like audit logs, RBAC, and secrets rotation are only available to paid subscribers. Akeyless AKeyless goes all out features, providing a wide variety of authentication and authorization methods for how the keys and secrets it manages can be accessed. It supports standards like RBAC and OIDC as well as 3rd party services like AWS IAM and Microsoft Active Directory. It keeps up with the competition in providing encryption at rest and in transit, real-time access to secrets, short-lived secrets and keys, automated rotation, and auditing. It also provides features like just-in-time zero trust access, a password manager for browser-based access control as well as password sharing with short-lived, auto-expiring passwords for third parties that can be tracked and audited. In addition to 14 different authentication options, it offers seven different SDKs and dozens of integrations for platforms ranging from Azure to MongoDB to Remote Desktop Protocol. They offer a reasonable free tier that includes 3-days of log retention (as opposed to other platforms where it's a paid feature only). 1Password You might be asking, "Isn't that just a password manager for my browser?" If you think that's all they offer, think again. They offer consumer, developer, and enterprise solutions, and what we're going to look at is their developer-focused offering. Aside from zero-trust models, access control models, integrations, and even secret scanning, one of their claims that stands out on the developer page is "Go ahead – commit your .env files with confidence." This stands out because .env files committed to source control are a serious source of secret sprawl. So, how are they making that safe? You're not putting secrets into your .env files. Instead, you're putting references to your secrets that allow them to be loaded from 1Password using their services and access controls. This is somewhat ingenious as it combines a format a lot of developers know well with 1Password's access controls. It's not plug-and-play and requires a bit of a learning curve, but familiarity doesn't always breed contempt. Sometimes it breeds confidence. While it has a limited number of integrations, it covers some of the biggest Kubernetes and CI/CD options. On top of that, it has dozens and dozens of "shell plugins" that help you secure local CLI access without having to store plaintext credentials in ~/.aws or another "hidden" directory. And yes, we mentioned they offer secrets scanning as part of their offering. Again, you might ask why someone writing for GitGuardian would mention a product with a competing feature. HashiCorp Vault HashiCorp Vault offers secrets management, key management, and more. It's a big solution with a lot of features and a lot of options. Besides encryption, role/identity-based secrets access, dynamic secrets, and secrets rotation, it offers data encryption and tokenization to protect data outside the vault. 
It can act as an OIDC provider for back-end connections as well as sporting a whopping seventy-five integrations in its catalog for the biggest cloud and identity providers. It's also one of the few to offer its own training and certification path if you want to add being Hashi Corp Vault certified to your resume. It has a free tier for up to 25 secrets and limited features. Once you get past that, it can get pricey, with monthly fees of $1,100 or more to rent a cloud server at an hourly rate. In Summary Whether it's one of the solutions we recommended or another solution that meets our recommendations of what to look for above, we strongly recommend integrating a secret management tool into your development processes. If you still need more convincing, we'll leave you with this video featuring GitGuardian's own Mackenzie Jackson.

By Greg Bulmash
Python Function Pipelines: Streamlining Data Processing
Python Function Pipelines: Streamlining Data Processing

Function pipelines allow seamless execution of multiple functions in a sequential manner, where the output of one function serves as the input to the next. This approach helps in breaking down complex tasks into smaller, more manageable steps, making code more modular, readable, and maintainable. Function pipelines are commonly used in functional programming paradigms to transform data through a series of operations. They promote a clean and functional style of coding, emphasizing the composition of functions to achieve desired outcomes. In this article, we will explore the fundamentals of function pipelines in Python, including how to create and use them effectively. We'll discuss techniques for defining pipelines, composing functions, and applying pipelines to real-world scenarios. Creating Function Pipelines in Python In this segment, we'll explore two instances of function pipelines. In the initial example, we'll define three functions—'add', 'multiply', and 'subtract'—each designed to execute a fundamental arithmetic operation as implied by its name. Python def add(x, y): return x + y def multiply(x, y): return x * y def subtract(x, y): return x - y Next, create a pipeline function that takes any number of functions as arguments and returns a new function. This new function applies each function in the pipeline to the input data sequentially. Python # Pipeline takes multiple functions as argument and returns an inner function def pipeline(*funcs): def inner(data): result = data # Iterate thru every function for func in funcs: result = func(result) return result return inner Let’s understand the pipeline function. The pipeline function takes any number of functions (*funcs) as arguments and returns a new function (inner). The inner function accepts a single argument (data) representing the input data to be processed by the function pipeline. Inside the inner function, a loop iterates over each function in the funcs list. For each function func in the funcs list, the inner function applies func to the result variable, which initially holds the input data. The result of each function call becomes the new value of result. After all functions in the pipeline have been applied to the input data, the inner function returns the final result. Next, we create a function called ‘calculation_pipeline’ that passes the ‘add’, ‘multiply’ and ‘substract’ to the pipeline function. Python # Create function pipeline calculation_pipeline = pipeline( lambda x: add(x, 5), lambda x: multiply(x, 2), lambda x: subtract(x, 10) ) Then we can test the function pipeline by passing an input value through the pipeline. Python result = calculation_pipeline(10) print(result) # Output: 20 We can visualize the concept of a function pipeline through a simple diagram. 
Another example: Python def validate(text): if text is None or not text.strip(): print("String is null or empty") else: return text def remove_special_chars(text): for char in "!@#$%^&*()_+{}[]|\":;'<>?,./": text = text.replace(char, "") return text def capitalize_string(text): return text.upper() # Pipeline takes multiple functions as argument and returns an inner function def pipeline(*funcs): def inner(data): result = data # Iterate thru every function for func in funcs: result = func(result) return result return inner # Create function pipeline str_pipeline = pipeline( lambda x : validate(x), lambda x: remove_special_chars(x), lambda x: capitalize_string(x) ) Testing the pipeline by passing the correct input: Python # Test the function pipeline result = str_pipeline("Test@!!!%#Abcd") print(result) # TESTABCD In case of an empty or null string: Python result = str_pipeline("") print(result) # Error In the example, we've established a pipeline that begins by validating the input to ensure it's not empty. If the input passes this validation, it proceeds to the 'remove_special_chars' function, followed by the 'Capitalize' function. Benefits of Creating Function Pipelines Function pipelines encourage modular code design by breaking down complex tasks into smaller, composable functions. Each function in the pipeline focuses on a specific operation, making it easier to understand and modify the code. By chaining together functions in a sequential manner, function pipelines promote clean and readable code, making it easier for other developers to understand the logic and intent behind the data processing workflow. Function pipelines are flexible and adaptable, allowing developers to easily modify or extend existing pipelines to accommodate changing requirements.

By Sameer Shukla
Insights From AWS re:Invent 2023

AWS re:Invent is an annual conference hosted by Amazon Web Services. AWS re:Invent 2023 stood out as a beacon of innovation, education, and vision in cloud computing. Held in Las Vegas, Nevada, over five days, the conference was one of the largest gatherings in the cloud sector, attracting an estimated 65,000+ attendees from around the globe. Having had the privilege to attend this year (2023), I am excited to share the key takeaways from the conference and from my interactions with some of the brightest minds in cloud computing. I aim to inspire and shed light on the expansive possibilities cloud technology offers.

AWS Aurora Limitless Database

Enterprise applications typically rely on backend databases to host all the data the application needs. As you add new capabilities to your application or your customer base grows, the volume of data hosted by the database surges rapidly, and the number of transactions that require database interaction increases significantly.

There are many proven ways to manage this increased load and enhance the performance of the backing database. For example, we can scale up the database by allocating more vCPU and memory, optimize the SQL queries, or use advanced features like "Input-Output optimized reads" on Amazon Aurora databases. We can also add read-only nodes (read replicas) to serve traffic that only performs read operations.

However, before the AWS Aurora Limitless Database launched, no out-of-the-box feature allowed data to be distributed across multiple database instances, a process known as database sharding. Sharding allows each instance to handle write requests in parallel, significantly enhancing write performance. However, sharding requires the application team to add logic within the application to determine which database instance should serve each request. In addition, sharding introduces enormous complexity, as the application must manage ACID transactions and ensure consistency guarantees across shards.

Amazon Aurora Limitless Database addresses these challenges by combining the scalability of a sharded database with the simplicity of managing a single one. It maintains transactional consistency across the system, which allows it to handle millions of transactions per second and manage petabytes of data within a single Aurora cluster. As a consumer of the Amazon Aurora Limitless Database, you only need to interact with a single database endpoint; the underlying architecture ensures that write requests are directed to the appropriate database instance. Therefore, if your use case involves processing millions of write requests per second, Amazon Aurora Limitless Database is well-equipped to meet this demand.
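To make that application-side burden concrete, the following is a minimal, illustrative Python sketch of the kind of hash-based routing logic a team might otherwise maintain for a manually sharded database. It is not an AWS API and was not shown at the conference; the shard endpoints and the shard_for helper are hypothetical. Aurora Limitless effectively moves this routing behind its single endpoint.

Python

import hashlib

# Hypothetical shard endpoints an application would otherwise manage itself
SHARD_ENDPOINTS = [
    "orders-shard-0.example.internal",
    "orders-shard-1.example.internal",
    "orders-shard-2.example.internal",
]

def shard_for(customer_id: str) -> str:
    # A stable hash of the shard key decides which instance owns the row
    digest = hashlib.sha256(customer_id.encode()).hexdigest()
    index = int(digest, 16) % len(SHARD_ENDPOINTS)
    return SHARD_ENDPOINTS[index]

# The application must route every write to the owning shard itself,
# and still handle cross-shard transactions and consistency on its own
print(shard_for("customer-42"))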
Amazon S3 Express One Zone

Amazon S3 Express One Zone is a single-Availability Zone storage class that consistently delivers single-digit millisecond access to frequently accessed data. Compared to S3 Standard, it delivers data access up to 10x faster with request costs up to 50% lower. It is ideal for use cases that need high performance, low latency, and cost-effective storage while not requiring the multi-Availability Zone (AZ) data resiliency offered by other S3 storage classes. So, if you need to process large amounts of data quickly, such as scientific simulations, big data analytics, or training machine learning models, S3 Express One Zone supports these intensive workloads by feeding data to computation engines faster.

ElastiCache Serverless

Before looking at ElastiCache Serverless, it's essential to understand the role of caching in modern applications. A cache is an in-memory data store that enables applications to access data with high speed and low latency, significantly enhancing the performance of web applications. Amazon ElastiCache is a fully managed in-memory data store and caching service from Amazon Web Services, compatible with the open-source in-memory data stores Redis and Memcached.

In the traditional ElastiCache setup, we need to specify the capacity of the ElastiCache cluster upfront when creating it. This capacity remains fixed, leading to potential throttling if demand exceeds it or wasted resources if demand stays consistently below it. While it's possible to scale resources manually or implement custom scaling solutions, managing this for applications with continuous, variable traffic can be complex and cumbersome.

In contrast, ElastiCache Serverless is a fully managed AWS offering that eliminates manual capacity management. The serverless model scales horizontally and vertically to match traffic demand without affecting application performance: it continuously monitors the CPU, memory, and network utilization of the cache and dynamically scales capacity in or out to align with current demand, ensuring optimal efficiency and performance. ElastiCache Serverless maintains a warm pool of engine nodes, allowing it to add resources on the fly and meet changing demand quickly. Since it's a managed service, software updates are handled automatically by AWS, and you pay only for the capacity you use, which can enable cost savings compared to provisioning for peak capacity, especially for workloads with variable traffic patterns. Finally, launching a serverless ElastiCache cluster is extremely quick; it can be created within a minute via the AWS console.
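As a concrete illustration of the cache-aside pattern that ElastiCache typically serves, here is a minimal Python sketch using the open-source redis-py client, with which ElastiCache for Redis is protocol-compatible. The endpoint, key format, TTL, and the load_user_from_db helper are placeholders of my own, not details from the conference.

Python

import json
import redis  # open-source client; ElastiCache for Redis speaks the same protocol

# Placeholder endpoint; a real cache exposes its own TLS endpoint
cache = redis.Redis(host="my-cache.example.amazonaws.com", port=6379, ssl=True)

def load_user_from_db(user_id):
    # Stand-in for a real database query
    return {"id": user_id, "name": "example"}

def get_user(user_id):
    key = f"user:{user_id}"
    cached = cache.get(key)
    if cached is not None:
        # Cache hit: skip the database entirely
        return json.loads(cached)
    # Cache miss: read from the database and store with a 5-minute TTL
    user = load_user_from_db(user_id)
    cache.setex(key, 300, json.dumps(user))
    return user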
Amazon Q

Amazon Q, launched during AWS re:Invent 2023, is a generative AI-driven service built to assist IT specialists and developers in navigating the complexities of the entire application development cycle, from initial research through development, deployment, and maintenance. It integrates with your enterprise information repositories and codebases, enabling the generation of content and actions based on enterprise data. Amazon Q also facilitates the selection of optimal instance types for specific workloads, leading to cost-effective deployment strategies. Additionally, it simplifies error resolution across AWS services by providing quick insights without requiring manual log reviews or in-depth research, and it addresses network connectivity challenges using tools like the Amazon VPC Reachability Analyzer to pinpoint and correct potential network misconfigurations. Its integration with development environments through Amazon CodeWhisperer further enhances its utility, allowing developers to ask questions and receive code explanations and optimizations. This is especially beneficial for debugging, testing, and developing new features. While Amazon Q can address a broad spectrum of challenges throughout the application development lifecycle, its capabilities extend far beyond the scope of this article.

Machine Learning Capabilities Offered by CloudWatch

Amazon CloudWatch is the AWS monitoring service that collects logs, metrics, and events, providing insights into AWS resources and applications. It has been enhanced with machine learning capabilities, including pattern analysis, comparison analysis, and anomaly detection for efficient log data analysis. The recent introduction of a generative AI feature that generates Logs Insights queries from natural-language prompts further simplifies log analysis for cloud users. For a detailed exploration of these features, please refer to this article: Effective Log Data Analysis with Amazon CloudWatch.

Additional Highlights from AWS re:Invent 2023

There are several other notable announcements from AWS re:Invent 2023, including zero-ETL integrations with Amazon OpenSearch Service, which simplify data analysis by enabling direct, seamless data transfers without building complex ETL processes. AWS Glue, a serverless ETL service, added anomaly detection features for improved data quality, and Application Load Balancer now supports automatic target weights based on health indicators such as HTTP 500 errors. For a full rundown of announcements and in-depth analyses, please see the AWS Blog.

Conclusion

AWS re:Invent 2023 offered a unique opportunity to dive deep into the cloud technologies shaping our world. It highlighted the path forward in cloud technology, showcasing many innovations and insights, and underscored the endless possibilities that AWS continues to unlock for developers, IT professionals, and businesses worldwide.

By Rajat Gupta

The Latest Popular Topics

How To Implement OAuth2 Security in Microservices
Learn to implement OAuth2 security in distributed microservices systems using OAuth2, OAuth2-Client, Spring Cloud, and Netflix components, with full examples.
Updated March 29, 2024
by Amran Hossain
· 75,659 Views · 14 Likes
Cognitive and Perspective Analytics
Cognitive and perspective analytics represent powerful tools in the analytical era. When used together, they can provide a more complete picture of analytics data.
March 29, 2024
by Prashanth Mally
· 181 Views · 1 Like
Building AI-Powered Ad Recommendation Systems: Tips and Tricks for Developers
In this blog post, we will delve into the intricacies of building AI-powered ad recommendation systems and key tips and tricks that developers need to consider.
March 29, 2024
by Atul Naithani
· 352 Views · 1 Like
Empowering Developers: Navigating the AI Revolution in Software Engineering
AI has become a fundamental part of modern software development, impacting devs in both positive and negative ways and highlighting the importance of continuous learning.
March 29, 2024
by Yifei Wang
· 373 Views · 1 Like
Java 22 Brings Major Enhancements for Developers
Java 22 brings major enhancements for developers, including language improvements, concurrency updates, native interoperability, and performance optimizations.
March 29, 2024
by Tom Smith
· 537 Views · 1 Like
The AI Revolution: Transforming the Software Development Lifecycle
In recent years, artificial intelligence (AI) has emerged as a transformative force across various industries, and software development is no exception.
March 28, 2024
by Ruchita Varma
· 738 Views · 1 Like
The Power of LLMs in Java: Leveraging Quarkus and LangChain4j
This post attempts to demystify the use of LLMs in Java, with Quarkus and LangChain4j, across a ludic and hopefully original project.
March 28, 2024
by Nicolas Duminil
· 919 Views · 1 Like
Deno Security: Building Trustworthy Applications
Deno's design philosophy prioritizes security, making it an ideal choice for building secure, reliable, and trustworthy applications.
March 28, 2024
by Josephine Eskaline Joyce
· 1,108 Views · 3 Likes
Enhancing Performance With Amazon ElastiCache Redis: In-Depth Insights Into Cluster and Non-Cluster Modes
The article discusses Amazon ElastiCache Redis, a managed in-memory caching service that can enhance the performance and user experience of applications.
March 28, 2024
by Satrajit Basu
· 1,127 Views · 1 Like
Data Streaming for AI in the Financial Services Industry (Part 2)
Learn data streaming strategies that lay a solid foundation for AI, moving data strategies from chaos to order.
March 27, 2024
by Christina Lin
· 298 Views · 1 Like
Know How To Get Started With LLMs in 2024
As we move into 2024, Large Language Models (LLMs) will become a fundamental driver of the generative AI space.
March 27, 2024
by Hiren Dhaduk
· 537 Views · 1 Like
Artificial Intelligence in Data Visualization: Ethics and Trends for 2024
The integration of immersive technologies into data visualization promises to redefine how we interact with and comprehend complex datasets.
March 27, 2024
by Nishan Singh
· 618 Views · 1 Like
How To Get Started With New Pattern Matching in Java 21
Dive into pattern matching, a powerful new feature in Java 21 that lets you easily deconstruct and analyze data structures. Follow this tutorial for examples.
March 26, 2024
by Daniel Oh
· 1,790 Views · 10 Likes
Empowering Developers Through Collaborative Vulnerability Management: Insights From VulnCon 2024
CVE and FIRST empower developers to create secure software through collaboration, standardization, and best practices in vulnerability management.
March 26, 2024
by Tom Smith
· 747 Views · 1 Like
Navigating the Digital Frontier: A Journey Through Information Technology Progress
Embark on a journey through the key milestones and advancements that have shaped the IT landscape, exploring the technologies driving progress and the implications.
March 26, 2024
by Santosh Sahu
· 898 Views · 2 Likes
Building Ethical AI Starts With the Data Team, Here’s Why
GenAI is an ethical quagmire. What responsibility do data leaders have to navigate it? We consider the need for ethical AI and why data ethics are AI ethics.
March 26, 2024
by Lior Gavish
· 815 Views · 1 Like
Enhancing Secure Software Development With ASOC Platforms
Elevate DevSecOps with AI-powered ASOC platforms for faster, secure software builds. Simplify compliance and enhance security. Explore more in this article.
March 26, 2024
by Alex Vakulov
· 1,609 Views · 1 Like
Converting ActiveMQ to Jakarta (Part II)
In this article, learn more about migrating a mature code base to Jakarta based on the impacts of the new Jakarta framework.
March 25, 2024
by Matt Pavlovich
· 258 Views · 1 Like
Navigating the AI Renaissance: Practical Insights and Pioneering Use Cases
A look into AI, with real examples and new uses of models like LLaMA, encourages us to imagine a future where AI and human creativity come together.
March 25, 2024
by Rajat Gupta
· 668 Views · 4 Likes
DZone's Cloud Native Research: Join Us for Our Survey (and $750 Raffle)!
Calling all cloud, K8s, and microservices experts — participate in our cloud native research and enter the raffle for a chance to receive $150.
Updated March 25, 2024
by Caitlin Candelmo
· 30,019 Views · 15 Likes