
Get under the hood and get your hands dirty

By Shiv Jeet Rai

“What makes you a good software tester?” is a question we have all asked ourselves in one form or another. Even though it is a simple question, the answer can be hard to give. After all, every organization is different, every piece of software is different, and every test task is different. That said, there are definitely some traits we as testers need to have. In this blog we will look at some of them.

Work on your programming skills

It is a common perception that writing code and debugging is the developer’s job, and there is no denying that. However, a good software tester should possess a reasonable amount of programming skill. Basic knowledge of coding brings a lot of value to the table, irrespective of our role in the project. It helps the tester create tools that speed up manual testing, both for test data generation and for verification, e.g. by querying and modifying the database to find and prepare data. It also gives us a mindset that makes it easier to grasp how the functionality works under the hood, and therefore what needs to be tested.
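For example, a small script along these lines can generate test data and verify it directly against the database. This is a minimal sketch: the table, columns and eligibility rule are invented for illustration, and a real project would point at its own test database.

import sqlite3
import random
import string

# Minimal sketch: generate test customers and verify them directly in the database.
# The table, columns and the "eligibility" rule are invented for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, balance REAL)")

def random_name(length=8):
    return "".join(random.choices(string.ascii_lowercase, k=length))

# Test data generation: insert 100 customers with random names and balances.
rows = [(random_name(), round(random.uniform(0, 10_000), 2)) for _ in range(100)]
conn.executemany("INSERT INTO customers (name, balance) VALUES (?, ?)", rows)

# Verification: query the database to find data matching a test condition,
# e.g. customers eligible for a premium offer (balance above 5000).
eligible = conn.execute("SELECT COUNT(*) FROM customers WHERE balance > 5000").fetchone()[0]
print(f"{eligible} of 100 generated customers qualify for the premium-offer test case")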

Furthermore, having the ability to read and understand application code also increases our credibility. It makes the developers take us more seriously when bugs are found or concerns about the functionality are raised.

Ask questions and understand the business needs

Having a clear and crisp requirements document is something most testers can only dream of. It is therefore important that questions are asked, and asked of the right people. This cannot be emphasized enough. It is a common mistake many of us make: we simply don’t manage to get an overview of the different people involved who can provide the information we need.

To prevent that, talk to everyone and make a list of the areas they can contribute to. That could range from Marketing, Development, Sales and Support all the way to the CEO. It is imperative that we as testers see and understand the bigger picture, and understand the organizational structure of the customer. Another point we tend to forget is that “perfect is the enemy of good”, meaning that if an application is good enough to satisfy the customer’s need, further improvement in that area might not be justified. Special cases may simply cost more to fix than to handle with a workaround in production. In addition, accounting for such cases can result in a more complex application, which in turn makes the application harder to maintain. As testers we must keep our inner perfectionist in check and help the business understand what is important to them and how any deviation might be handled. A good understanding of return on investment (ROI) is vital.

Share and grow

Sharing knowledge, such as how the application’s functionality works, with the business or other team members is something many of us don’t usually associate with our day-to-day tasks.

But sharing knowledge is also a path to gaining knowledge. It gives a lot in return, as the other party will ask questions and give their views on the topic. Through this, we get more ideas and a better understanding of processes and improvements. In addition, the bond with the business and/or the other team members gets stronger. It is a win-win.

Thus, in my opinion, a good tester shows the traits mentioned in this blog. A good tester has some basic programming skills, so that he or she can create (or at worst use) snippets of code to increase their own and their team members’ productivity. That skill set also helps improve communication with, for example, the developers, and raises the overall quality of the incidents raised, as root causes become easier to pinpoint. A good tester is someone who is not afraid of taking up space and asking questions. The tester plays a vital part in ensuring that the application satisfies the business requirements. He or she should help the business understand what is required and, if possible, challenge the business to explain why a certain attribute is needed. Is the attribute really worth the implementation cost, and what value does it give? Is it a nice-to-have or a need-to-have? Finally, the tester should not be afraid of sharing knowledge, as this brings down barriers and opens a lot of doors for the tester to gain knowledge.

So, get under the hood and get your hands dirty!

The post Get under the hood and get your hands dirty appeared first on SogetiLabs.

Breaking Bad Data Habits

By John McIntyre

Why is AI not everywhere?

When I talk to Sogeti clients about AI and Data, they usually tell me that they are “already doing AI” or “we have a Data Science team”. I hear practically every organisation stating that AI and Data are important. And I see analyst agencies and research companies talking about how important it is and how organisations which leverage AI and Data do better than the competition. So why is AI not everywhere in these organisations? Something is missing, and I think that the following 3 Bad Data Habits account for a lot of the issues.

Habit 1: “We already have Data Scientists and we use AI”

If you have Data Scientists and value their work, why does everyone in an organisation not have access to it?

  1. Most organisations that leverage Data and AI do it in a tactical way: a point solution that answers one specific issue.
  2. Data Scientists are actually IT resources or other business resources working as enthusiastic amateurs, or they have training but no experience working in bigger data teams.
  3. Analysis and reports take a long time to create and share.
  4. Data scientists and data analysts don’t distribute their work, because they have no platform to do so.

Habit 2: “We need to collect data in a central location”

The thought process goes: if you have data in a central location, your data scientists and analysts will have easier access to the data and, as a result, they can do more and better work. “We need a Data Lake” to improve our use of data.

For many reasons, this is just wrong. I am not saying that you should not do this, but adding new technology will not solve the real problems. A technology solution could even contribute new problems. Organisations need to know who owns the data. This is enabled by a Data Culture in an organisation. A Data Culture enables organisations to distribute ownership of data sources and systems, distribute data analysis, share knowledge and, most importantly, make data reliable and trustworthy.

When organisations add a Data Lake (or database or similar), security and access rights need to be organised, data quality needs to be monitored, data dictionaries need to be created, etc…  And this is something else that IT needs to manage.

Habit 3: “But we are unique, and you don’t understand our business”

There might be something to this, but we have analysed it and, mostly, it is untrue:

  • Experienced data professionals understand data.
  • Data professionals understand data patterns.
  • Data tells its own story.

Domain expertise does help. However, because we use standardised patterns, Data Science can often take advantage of not understanding the domain and simply analysing the data. And we have also seen similar data patterns across industries (I have examples of similar data flows in Banking Loans and Drug Research and Development, for instance).

How can my organisation take advantage of AI and Data?

Ask yourself these questions:

  • Does my organization have a data culture?
  • Does the whole organisation have easy access to data and simple reports?

If the answer to either question is No, you are not ready for AI and probably don’t understand its value anyway.

The most successful Data Driven Organisations are good at AI and leveraging data because they do the following well:

  • Data Democratization
  • Data Governance
  • Data Security
  • Business Ownership of Data and Data Sources
  • Training
  • A company-wide strategy driven from senior executives down.

It is these points that help an organization be successful because they are the most basic foundation building blocks.  After all, if you want to see this technology everywhere in your organization, you simply need to enable your organization to use and understand the data advantage.

The post Breaking Bad Data Habits appeared first on SogetiLabs.

Watch “Predictions for 2021” with George Colony

By Sogeti Labs

2020 was something else… Time Magazine even called it “The worst year ever”. Are you prepared for 2021? We invite you to this in-depth interview with the Founder and CEO of Forrester Research, George Colony. In the conversation George will present his “What Matters Now” agenda for 2021. Faced with the pandemic, we’ve witnessed that firms did things that once seemed impossible – sometimes overnight. He will explain that much of your success will depend on how quickly and how well you harness technology to both enable your workforce in the new normal and build platforms that differentiate your firm.

The post Watch “Predictions for 2021” with George Colony appeared first on SogetiLabs.

Can Smart Contracts replace my banking system in the future?

By Ayan Bhattacharya

In today’s world, our traditional business contracts are all handled manually by legal teams. Any contractual agreement involves a specific set of terms and conditions, and we typically see a requirement for financial institutions, insurers and legal advisors sitting between the deliveries and the payments of any business contract. It is a human-driven, manual approach wherein contracts are written at the discretion of a person, a team or a specific institutional policy. Although this approach works in 99% of cases, it consumes time and adds indirect cost, and these indirect costs are not recoverable. In this blog I will try to highlight a parallel approach using blockchain smart contracts. Although it is quite common in the blockchain world, let me stress it once more so we can rethink whether we still need our banking systems.

Imagine how a vending machine is programmed, how the business model works and how the technology supports the business model. In a vending machine, you can buy a product. The system contains a lot of products with their prices. You choose a product and insert a coin or a note. You get the product, and the change, if any, is returned. Suppose a can of coke costs 1.20 and you insert a 2 euro coin: the machine can return one 50, one 20 and one 10 cent coin, or one 50 and three 10 cent coins, or any other combination of 50, 20, 10, 5, 2 or 1 cent coins. The logic is pre-determined and, depending on the coins available, the system chooses the best combination to return. If it cannot serve you, the money is returned without the product. Does it require a human? No. Does it require a bank? No. Can you change the input midway through the process? No. The entire process is executed without a human involved. This business model is unique because, technically speaking, there is no involvement from any third party during the transaction, as long as the vending machine is programmed properly.
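The change-return logic described above is easy to sketch in code. The snippet below is purely illustrative (amounts in cents, and a simple greedy strategy that is only one of the policies a machine could use); it is not the firmware of any real machine.

# Illustrative sketch of a vending machine's change logic (amounts in cents).
# Greedy strategy: always return the largest available coin first. This is just
# one possible policy; the denominations and coin inventory below are made up.
COIN_DENOMINATIONS = [50, 20, 10, 5, 2, 1]

def make_change(price, inserted, available):
    """Return a dict of coins to give back, or None if exact change cannot be made."""
    change_due = inserted - price
    if change_due < 0:
        return None  # not enough money inserted
    returned = {}
    for coin in COIN_DENOMINATIONS:
        while change_due >= coin and available.get(coin, 0) > 0:
            available[coin] -= 1
            returned[coin] = returned.get(coin, 0) + 1
            change_due -= coin
    if change_due != 0:
        return None  # cannot make exact change: refund instead of vending
    return returned

# A can of coke costs 1.20 euros and the customer inserts a 2 euro coin:
print(make_change(120, 200, {50: 1, 20: 1, 10: 3}))  # -> {50: 1, 20: 1, 10: 1}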

Smart contracts work in the same way. A smart contract is a program with a fixed set of inputs and outputs in a distributed ledger system. Typically, a contract is signed and a person or a firm agrees on deliverables. A smart contract program starts delivering results on its own, with payments auto-credited, without the intervention of any financial regulator or institution. It is a completely end-to-end programmed approach. The logic is predetermined, the deliverables are fixed, the output requirements are predictable, and the payments are uninterrupted. Ideally, it cannot be manipulated to change the outputs once it is configured and starts operating on a delivery.

Smart contracts perform the same set of activities in a predetermined way. A smart contract is a program that does the work of a lawyer, an insurer and a bank, and it ensures absolute transparency. Usage is still limited, but it has the potential to impact many industries: aviation, telecommunication, banking, insurance, education and health. If precision is required in delivery, smart contracts are the ideal way to move ahead. There are some limitations worth mentioning too. Most countries have yet to approve transactions that are not regulated by central banks. Smart contracts are only as smart as the developers who write the code; to make sure that delivery and payment work hand in hand, the code must be perfect, and bugs and errors can have serious monetary implications. In reality, requirements vary from time to time and from customer to customer, so contract reusability is limited.

Ethereum, which is similar to Bitcoin, is commonly used for smart contracts. Its ledger contains a series of cryptographic public records linked together and stamped with the date, time and change details. In a distributed ledger it becomes almost impossible to change a record without approval from all the other nodes.
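The idea of records linked together by cryptographic hashes can be illustrated in a few lines of code. The sketch below is a toy hash-chained ledger for intuition only; it is not Ethereum’s actual data structure or consensus mechanism.

import hashlib
import json
import time

# Toy illustration of a hash-chained ledger: each record embeds the hash of the
# previous record, so changing any record invalidates everything after it.
def make_record(previous_hash, payload):
    record = {
        "timestamp": time.time(),
        "payload": payload,
        "previous_hash": previous_hash,
    }
    record["hash"] = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
    return record

ledger = [make_record("0" * 64, {"event": "contract created"})]
ledger.append(make_record(ledger[-1]["hash"], {"event": "delivery confirmed"}))
ledger.append(make_record(ledger[-1]["hash"], {"event": "payment released"}))

# Tampering with an earlier record breaks the chain for every later record.
ledger[1]["payload"]["event"] = "payment released twice"
recomputed = hashlib.sha256(
    json.dumps({k: v for k, v in ledger[1].items() if k != "hash"}, sort_keys=True).encode()
).hexdigest()
print("record still valid:", recomputed == ledger[1]["hash"])  # -> False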

Cryptocurrencies are gaining popularity and are self-regulated; they don’t need a central bank’s approval. This possibly makes them one of the greatest disruptors of the future, if their use is allowed. And if I am not using a banking system, it becomes much harder to enforce that I pay my taxes. It’s a different thought, or maybe I am thinking of creating a parallel banking system globally. What do you think?

The post Can Smart Contracts replace my banking system in the future? appeared first on SogetiLabs.

The changing face of InsurTech in Commercial Property

By Gopikrishna Aravindan

Grace Hopper, the American computer scientist, is famously quoted as saying:

‘The most dangerous phrase in our language is – we have always done it this way.’

Interestingly, her team is credited with popularizing the term “debugging” for fixing computer glitches, inspired by an actual moth removed from a computer relay.

Commercial Property insurance has been largely driven by players with a massive risk appetite. Although the insurance industry accounts for more than 3%* (see footnote) of US GDP, it is perceived as slow to adopt new technologies. Compared to the e-commerce or travel industries, traditional insurance carriers have not been as quick to leapfrog from legacy systems. While popular apps like Amazon, TripAdvisor and Yelp have been rapidly deploying new technologies to retain existing customers and capture new ones, legacy applications continue to provide the bread and butter for large insurers.

However, the last decade has seen the arrival of smaller players who, either directly or by collaborating with large players, have transformed the policy admin journey. This article attempts to capture some of the leading InsurTech players in the commercial property policy admin value chain:

Trov is a cloud-based, direct-to-consumer app that offers micro-duration policies for specific items. This app is a big shift from the traditional insurance model, where the touch point happens once a year to buy a new policy or renew an existing one. The flexibility in duration and coverage terms is expected to attract more customers. During the onset of COVID, Trov offered coverage for last-mile delivery that allowed organizations to deploy drivers rapidly for home delivery.

Cytora enables underwriters to arrive at faster underwriting decisions by leveraging its data APIs, which pull risk information related to the prospect – mostly company data. Additionally, the product provides AI capabilities that allow underwriters to estimate loss predictions more accurately based on engineering data and to determine, much earlier in the underwriting process, whether the insurer has the appetite to assume the risk.

Small and mid-size markets (SME) are volume driven, and today’s digital world presents massive amounts of data about these enterprises. DFP’s AI solution provides hand-designed elements that allow underwriters to incorporate manual inputs in addition to the vast data scrubbed and analyzed by its algorithms on bound as well as prospect information.

Instanda is looking to capitalize on increasing product customization and lower distribution needs by providing an end-to-end platform spanning digital engagement, distribution and underwriting. The platform aims to provide a no-code capability that will enable business users to rapidly deploy underwriting solutions based on changing product configurations, thereby removing the dependency on technology teams and shortening time to market.

Hope you enjoyed reading this article. Are there any promising InsurTech stories that you would like to share?

*https://www.statista.com/statistics/1040495/property-casualty-direct-insurance-market-size-usa/

The post The changing face of InsurTech in Commercial Property appeared first on SogetiLabs.

Azure Purview

By Alberto Alonso Marcos

We create the new resource group

We create the Purview resource

One of the first actions I need to take is to assign a role for Purview on the storage accounts we created.

From that moment on, we already have the option to scan our data source. We note that the options on the left side of our Purview console have increased.

Once we have enabled the read role in our data sources, we can proceed to work with them through Azure Purview. We will start with our Azure Data Lake Gen2. Click on Register and select the resource from the set on the right.

And we register our resource, creating a collection called Azure-Synapse-Workshop.

We proceed to register it

Completed

Once registered, we proceed to perform a scan. To do this, we click on the AzureBlob target.

The Purview engine executes the process, connecting to the source and showing the different folders that exist in it. Click on continue

We select the scan rule. In our case, as we have not created any additional ones, we will work with the one that exists by default.

We can even schedule the scan frequency.

In this case, we will set it to run only once.

We run a final check.

And we proceed. Now we just have to wait for the results.

Completed

We see that it has successfully scanned the resource and found three assets, but none of them has information identified as classified.

After this step, we are going to register another resource, in this case Azure Synapse.

By linking the resource to the same collection, we see that it is included just below the Azure Blob.

IMPORTANT: The case of Azure Synapse is a bit more laborious than the previous one. Here we must, on the one hand, have the SQL Pool running and, on the other, create the permissions for our Azure Purview account through T-SQL.

Let’s see how. The first thing is to open the Azure Synapse Workspace.

We see all the tables and proceed to execute the scan

We see the result

As we have a table with customer information in Azure Synapse, let’s see how it looks.

To show us the lineage of the data, we must use Data Factory.

I create a new database, and a Data Factory pipeline that replicates the one previously created in Azure Synapse.

In order to view the server in Purview, we must add the read permissions and add permissions in the database.
To do this, we must create a user in Active Directory.

Assign the role to that user.

Reset the password. To do this, sign in with this username and the temporary password, reset it, and the new password will be the one used to connect with SSMS.
Then, through SSMS, connect using Active Directory – Password authentication to be able to execute the script below.

CREATE USER [purviewaa] FROM EXTERNAL PROVIDER
GO

EXEC sp_addrolemember 'db_owner', [purviewaa]
GO

NOTE:
I did the same with the rest of the data sources I was working with, for example the SQL Pool of Azure Synapse.

And even with the Azure Blob Storage account

And our Azure Synapse

We can include details in the datasets, for example by assigning experts or owners.

In the case of the classification, we observe that the tool has made a first pass, but we have the possibility to modify it and even extend it.


Creating a Glossary

Connection with Data Factory

To then be able to see the Lineage

The post Azure Purview appeared first on SogetiLabs.

WHAT’S WRONG WITH THAT LIGHT BULB?

By Tuomas Peurakoski

Hi all and a happy new year.

I spent most of my Christmas holidays playing two new games, namely Watch Dogs: Legion and Cyberpunk 2077. They both seem to have a distinctive theme.

While Cyberpunk 2077 is set in the future it does have similar hacking mechanics as does Watch Dogs: Legion (which in turn is set at “five minutes from now”). Both of the games rely heavily on using hacked security cameras to scope out whatever is going on in a classified building and the best results come from stealthily stealing data.

It’s all good fun and a bit fantastical, right? The smart phones we have clearly do not work like magic wands and we can’t hack inside every government facility.

However, as the year changed we saw the invasion of the Capitol and pretty much after that I saw this tweet:

My heart goes out to the unsung IT heroes at the Capitol tonight. My guess is they’ve never had to run asset inventory IR before – a daunting, stressful task in a tabletop exercise – and they’re running one (prob w/o a playbook) following a full on assault of the Capitol.

— socially distant, mask wearing bat (@mzbat) January 7, 2021

Riiiiight. All of a sudden they had a messy situation inside, so anyone with even an inkling of stealth could have marched to one of the unlocked computers and stuck in a thumb drive that could send a worm into the system. Or swapped one of the light bulbs for a hacked one. Or anything.

We do live in a Black Mirror society nowadays. The number of things that have to be assumed compromised is simply staggering. It may be implausible that we can hack huge drones with our smartphones and just ride them to steal data from the network from afar. But it is a fact that if someone can get physical access to a computer, they can get in. Let’s not forget how Edward Snowden managed to smuggle data out.

Years back I heard a story about the legendary hacker Kevin Mitnick. I’m not sure if it is true but let’s entertain the thought anyway.

Mitnick had boasted to a bank that he would infiltrate their systems at a certain time on a certain day. The IT security in the bank locked out any external use during that time and thought they had beaten Mitnick. At the specific time Mitnick called them and told them he was in their system.

But what did he actually do?

He appeared in overalls, carrying a ladder, at the reception area of the bank and said that he had been called in to fix the light bulb in a manager’s room. He was let in, and having physical access to the manager’s desktop computer, he got in easily.

So the lesson to be learned here is that while funny masks and loud noises draw our attention, something inconspicuous might be happening in the background.

And as tech people this is a risk we must always be aware of.

Have a safe year, everyone.

The post WHAT’S WRONG WITH THAT LIGHT BULB? appeared first on SogetiLabs.

Aviation Disasters and the IT World

By Richard Fall

I have a fondness for watching documentaries about aviation disasters.

Now, before you judge me as someone with a psychological disorder–we all slow down when we see an accident on the highway, but planes crashing into each other or the ground?–let me explain why I watch these depressing films and what it has to do with IT work.

I should start by noting that, as a private pilot, I have a direct interest in why aviation accidents happen. Learning from others’ mistakes is an important part of staying safe up there.

Ever see a car accident happen and find yourself compelled to Google what happened? Dr. Mayer says this is also our survival instincts at work. “This acts as a preventive mechanism to give us information on the dangers to avoid and to flee from,” he says.

But, then, there’s another reason I watch the documentaries that’s only recently become clear to me: seeing how mistakes are made in a domain where mistakes can kill can be generalized to understand how some mistakes can be avoided in other domains where, while the results might be less catastrophic to human life, they are still of high concern.

In my case, and likely in anyone’s case who is reading this, that’s the domain of IT work.

The most important fact I take away from the aviation disaster stories is that disasters are rarely the result of a single mistake but result from a chain of mistakes, any one of which if caught would have prevented the negative outcome.

Let me give an example of one such case and see how we, as IT professionals, might learn from it.

On the night of 1 July 2002, Bashkirian Airlines Flight 2937, a Tupolev Tu-154 passenger jet, and DHL Flight 611, a Boeing 757 cargo jet, collided in mid-air over Überlingen, a southern German town on Lake Constance, near the Swiss border. All 69 passengers and crew aboard the Tupolev and both crew members of the Boeing were killed.

On the night of July 1, 2002, two aircraft collided over Überlingen, Germany, resulting in the death of 71 people onboard the two aircraft.

The accident investigation that followed determined that the following chain of events led to the disaster:

  • The Air Traffic Controller in charge of the safety of both planes was overloaded as the result of the temporary departure of another controller in the center.
  • An optical collision warning system was out of service for maintenance but the controller had not been informed of this.
  • A phone system used by controllers to coordinate with other ATC centers had been taken down for service during his shift.
  • A change to the TCAS (Traffic Collision Avoidance System) on both aircraft that would have helped–and which was derived from a similar accident months earlier–had not yet been implemented.
  • The training manuals for both airplanes provided confusing information about whether TCAS or the ATC’s instructions should take priority if they conflicted.
  • Another change to TCAS, which would have informed the controller of the conflict between their instructions and the TCAS instructions, was not yet deployed.

Many issues led to the disaster (all of which, thankfully, have since been resolved)–but the important thing to note is that if any one of these issues had not arisen, the accident would likely not have happened.

That being true, what can we learn from this?

I would argue that, in each case, the “system” of air traffic control, airplane systems design, and crew training could have recognized that each issue, taken as an individual item, might lead to a disaster, and should have dealt with it in a timely manner. This is true even though each issue by itself could have been (and probably was) dismissed as being of little importance.

In other words, having a mindset that any single issue should be addressed as soon as possible without detailed analysis of how it could contribute to a negative outcome might have made all the difference here.

And here is where I think we can apply some lessons from this accident, and many others, to our work on IT projects.

We should always assume, absent evidence to the contrary, that a single issue during a project could have negative implications that are not immediately obvious, and that it should therefore be addressed and remediated as soon as practicable.

The difficult part of implementing this advice clearly comes from judging whether a single issue could affect the entire project, and weighing the cost of immediate remediation against the cost of failure. There is no easy answer to this–I tend to believe that unless there is a strong argument showing why a single event cannot become part of a failure chain, it becomes something that should be fixed now. Alternatively, if the cost of failure is seen as less than the cost of immediate remediation, then the issue can be safely put aside–but not ignored–for the time being.

To put this into perspective in our line of work:

Let’s imagine a system to be delivered that provides web-based consumer access to a catalog of items.

Let’s further imagine that the following are true:

  • The catalog data is loaded into the system database using a CSV export of data from another system of ancient vintage.
  • Some of the data imported goes into text fields.
  • Those text fields are directly used by the services layer.
  • Some of those text fields determine specific execution paths through the service layer code.
  • That service code assumes the execution paths can be completely specified at design time.
  • The UI layer is designed assuming that delivery of catalog data for display will be “browser safe”–i.e., no characters that will not display as intended.

This is a simple example, and over-constrained, but I think you can see where this is going.

If the source system has data, to be placed in the target system’s text fields, that contains characters not properly handled by the services layer and/or the UI layer, bad outcomes are likely to result.

For instance, some older systems permit the use of text documents produced in MSWord that promote raw single- and double-quote characters to “curly versions” and take the resulting Unicode data in raw form. Downstream this might result in failure within the service layer or improper display in the UI layer.

Most of us, as experienced IT professionals, would likely never let this happen. We would sanitize the data at some point in the process, and/or provide protections in the service/UI layers to prevent such data from producing unacceptable outcomes.
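As a concrete illustration, a small sanitization step at the import boundary might look like the sketch below. The character mapping and the sample row are invented for this example; a real project would define its own normalisation rules.

import csv
import io

# Illustrative sanitization step for a CSV import: normalise "curly" quotes and
# other common Word artefacts before the data reaches the services or UI layer.
WORD_ARTEFACTS = {
    "\u2018": "'",   # left single quotation mark
    "\u2019": "'",   # right single quotation mark
    "\u201c": '"',   # left double quotation mark
    "\u201d": '"',   # right double quotation mark
    "\u2013": "-",   # en dash
    "\u2014": "-",   # em dash
}

def sanitize(value):
    for bad, safe in WORD_ARTEFACTS.items():
        value = value.replace(bad, safe)
    return value

raw = io.StringIO("id,description\n1,\u201cDeluxe\u201d widget \u2013 boxed\n")
for row in csv.DictReader(raw):
    clean = {key: sanitize(val) for key, val in row.items()}
    print(clean)  # {'id': '1', 'description': '"Deluxe" widget - boxed'}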

But, for a moment, I want you to think of this as less than an argument for “defense in depth” programming. I want you to think of it as taking each step of the process outlined above as a separate item without knowing how each builds to the ultimate, undesirable outcome, and deciding to mitigate it on the basis of the simple possibility that it might cause a problem.

For example, if the engineer responsible for coding the CSV import process says “the likelihood of having problems with bad data can be ignored or taken care of in the services layer”, my suggested answer would be “you cannot be sure of that, and if we cannot be sure it won’t happen, you need to code against it”.

And, I would give the same answer to the services layer engineer who says “the CSV process will deal with any such issues”. You need to code against it.

It may sound like I’m simply suggesting that “defensive coding” is a good idea–and it is. But–and perhaps the example given is too easy–I would argue that the general idea I am suggesting is that you need to have a mindset that removes each and every item in a possible failure chain without knowing, for certain, that it could be a problem.

This suggestion is not without its drawbacks, and I would encourage you to provide your thoughts, pro or con, in the comments section of this blog.

In the meantime, I’ll be over here watching another disaster documentary….

The post Aviation Disasters and the IT World appeared first on SogetiLabs.

Key Principles for a Successful DevOps Culture

By Ankur Jain

In this article, we are going to learn some of the basic principles of DevOps. But before that, let’s start with the basics.

What is DevOps?

DevOps can be defined as a culture or process or practice within an organization that increases communication, collaboration, and integration of the Development (which includes the QA team) and the Operations (IT Operations) teams. The aim is to automate and speed up the software delivery process much more frequently and reliably.

To learn more about the basics of DevOps, click on this link.

Main principles of DevOps

Incremental: In DevOps, we aim to release software to production incrementally. We need to release to production more often than in the Waterfall approach of one large release.

Automated: To enable users to make releases more often, we automate the operations from code check-in to deployment in production.

Collaborative: DevOps is not the responsibility of the Operations team alone. It is a collaborative effort of the Dev, Release, QA and DevOps teams.

Iterative: DevOps is based on the iterative principle of using a repeatable process. With each iteration, we aim to make the process more efficient and better.

Self-Service: In DevOps, we automate things and give self-service options to other teams so that they are empowered to deliver the work in their domain.
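As a highly simplified illustration of the automated and iterative principles, a release pipeline is just a repeatable, ordered sequence of stages that runs the same way on every check-in. The sketch below is conceptual only; the stage bodies are placeholders, not a real pipeline definition.

# Minimal sketch of an automated, repeatable pipeline: the same ordered stages
# run on every check-in, and a failed stage stops the release.
# The stage bodies below are placeholders for illustration only.
def build():
    print("compiling the application")
    return True

def test():
    print("running unit and integration tests")
    return True

def deploy():
    print("deploying the new increment to production")
    return True

STAGES = [build, test, deploy]

def run_pipeline():
    for stage in STAGES:
        print(f"--- stage: {stage.__name__} ---")
        if not stage():
            print(f"stage '{stage.__name__}' failed; stopping the release")
            return False
    return True

if __name__ == "__main__":
    run_pipeline()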

To learn in detail about all the DevOps tools, click on this link.

The post Key Principles for a Successful DevOps Culture appeared first on SogetiLabs.

When your estate extends beyond what the eye can see

By Balaji Rajagopalan

Once upon a time, kings and ministers were wealthy based on their kingdom, the size of its produce and the taxes they could levy on it. Wars were fought to expand geographical kingdoms; winners got more land and could ride through their estate, patrol it and even guard it against opposing armies. In retrospect, those were the good times. You could physically see your estate, measure it and protect it. Flash forward to 2020: your estate consists of continuously churning data hubs and your super-large data warehouse in the cloud, and your income depends on how you protect, mine and extract value out of this precious commodity. Your security is woefully inadequate if you built it in the 1970s on a mainframe architecture, with technology that never anticipated the threats the internet could throw at you.

How do you protect what you cannot see?

Today you can neither see your data (physically, the servers and their storage) nor your attackers. Hackers are constantly looking to exploit gaps in IT systems, applications and hardware. Cyber-threats are becoming more common, with serious IT breaches making headlines every other day. Hackers are well equipped, and the cloud plus the democratization of the internet has created a level playing field (even though it’s an unfortunate thing) between large corporates and hackers. Today’s twenty-something hacker is assured and confident, armed with sophisticated tools, supported by an unseen, unknown army or network of mercenaries (or going solo), exploiting every single chink in your armour.

A plethora of cyber security products is floating around the market for organizations to choose from, but which is the most dependable, genuine and cost-efficient? Before we get to the conclusion of this question, you need to understand what threatens the company and what it takes to stop cyber-attacks. Once you’re aware of these concepts, you’ll see why SIEM is a need for organizations in today’s world.

Are you prepared for an obscure invasion?

Let’s take you back to the good old days of ancient history and war tactics. Until the introduction of modern machinery, animals played an often-decisive role in warfare. In the book Beasts of War: The Militarization of Animals, author Jared Eglan curated amazing insights into how militaries have used a stunning menagerie of animals in combat. Dogs and horses were probably the first animals used in war, and many are still used today in modern military and police tasks.

Bears appear a few times in the history of warfare; one bear in particular became famous for his exploits against the Germans during World War II. But to win a war you need something humongous and deadly, so allow me to introduce you to war elephants. Oftentimes a dynasty’s strength was determined by how many war elephants the king owned. War elephants can be compared with modern-day, fully equipped tanks: they were heavily armored and had a massive amount of weapons in their arsenal. They had a castle-like structure on their back for soldiers and a mahout to guide them, and the war elephants themselves had long daggers and swords, sometimes several feet long, attached to their tusks.

In place of war elephants, today’s high-tech world has SIEM, which stands for Security Information and Event Management. SIEM is the war elephant that will keep your cyber security team on top of security in real time. It is a system used to detect, prevent and resolve cyber-attacks while centralizing all the security events from every device within a network.

A significant feature of SIEM is that it gathers all raw security logs and data from the organization’s firewalls, access points, servers and other devices, categorizing and analyzing security alerts in real time.
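To make that idea tangible, the toy sketch below correlates raw events from different sources and raises an alert on one simple pattern (repeated failed logins). A real SIEM does this at massive scale with far richer rules; the event format and the threshold here are invented for illustration.

from collections import defaultdict

# Toy illustration of what a SIEM does conceptually: centralise raw events from
# many sources, categorise them, and raise alerts on suspicious patterns.
# The event format and the threshold below are invented for this example.
events = [
    {"source": "firewall", "type": "blocked_port_scan", "ip": "203.0.113.7"},
    {"source": "server01", "type": "failed_login", "ip": "203.0.113.7"},
    {"source": "server01", "type": "failed_login", "ip": "203.0.113.7"},
    {"source": "server02", "type": "failed_login", "ip": "203.0.113.7"},
    {"source": "access_point", "type": "new_device", "ip": "192.0.2.44"},
]

FAILED_LOGIN_THRESHOLD = 3

def correlate(all_events):
    failed_by_ip = defaultdict(int)
    alerts = []
    for event in all_events:
        if event["type"] == "failed_login":
            failed_by_ip[event["ip"]] += 1
            if failed_by_ip[event["ip"]] == FAILED_LOGIN_THRESHOLD:
                alerts.append(f"possible brute-force attack from {event['ip']}")
    return alerts

print(correlate(events))  # -> ['possible brute-force attack from 203.0.113.7']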

Why do you need a SIEM?

Advanced cyber threats are going to be prominent in 2021 and beyond. The revolutionizing disruption in IT is both a blessing and a potential curse. The old-school tactic of relying on firewalls and antivirus software is outdated. Your IDS and IPS won’t be able to detect malware and threats that arrive in attachments, banner ads and malicious websites and can gain access to your network through an internal device. Organizations should be prepared for all the challenges of cyber security and strengthen their foundations to cope with diverse cyber threats like AI-driven attacks, IoT attacks, social engineering, insider threats, phishing, new cyber regulations, etc.

Introducing Azure Sentinel

Microsoft’s Azure Sentinel is a cloud-native SIEM service with built-in AI for analytics. It removes the cost and complexity of achieving a central, focused, near real-time view of the active threats in your environment. And just like any other service in Azure, it scales automatically to your needs. Azure Sentinel works by correlating the security logs and signals from all sources across your apps, services, infrastructure, networks and users, whether they reside on-premises, in Azure or in any other cloud. The built-in AI leverages Microsoft threat intelligence that analyzes trillions of signals every day, and its machine learning models, refined through decades of security experience, filter through the noise from alerts, drilling into thousands of anomalous events to return a view of the threats that really require your attention. The overview dashboard, for example, gives you a bird’s-eye perspective of the events going on in your environment.

By now you may have realized why you might need a system as efficient as a SIEM to manage your security. With Sogeti’s cloud security experts and Azure Sentinel, you get the daily services of an experienced and knowledgeable support team, and a reliable product that detects attacks inside and out and reports threats accurately without producing false positives.

Are you interested in securing your data from potential threats? Drop a note in comments for more details.

Co-author of this article: Arif Mujawar

Arif Mujawar is a Business Analyst | Sogeti – Automation & AI.

Process Automation & Cyber Security aficionado, experienced in understanding stakeholder and business requirements, transforming data and creating visualizations.

The post When your estate extends beyond what the eye can see appeared first on SogetiLabs.

Top 5 SogetiLabs blogs from December 2020

By Sogeti Labs

Take a look at our most read and shared blog posts from December 2020.

Azure DevOps, Visual Studio, GitFlow, and other techniques from the heap

In essence, I am very curious. That is why, whenever I see, read or hear about something that piques my interest, I do not stop until I understand it and if it finally provides me with a benefit, I do my best to include it in my portfolio. This is something that happened to me, a while ago, with DevOps.

How to monitor your pipelines using Azure Data Factory Analytics from Microsoft Azure Cloud

Don’t be caught off guard! Never lose focus and check the details

One of the most important aspects of designing a good ELT solution is being able to control its performance. It is of little use to have the systems working if you are not able to know if they are doing it correctly or, if on the contrary, they are causing failures in them. That is why tools such as Azure Data Factory Analytics are increasingly necessary.

Knock me down, I get back up

Sometimes it’s hard to make sense of everything going on around you. There can be so much information and so little time to get to grips with it all. That’s certainly how it felt to me as we put together the latest World Quality Report (WQR) from Capgemini and Sogeti, in partnership with Micro Focus, during a time of unprecedented change and uncertainty. 

Kubernetes: The new cloud control plane?

Kubernetes is a platform that allows containerized workloads to be managed, scaled and automated. As the ecosystem is growing more and more use cases become available. The architecture of Kubernetes allows for nearly limitless customization and extension of its function.

2021: How games will inspire innovation for collaboration tools

More than 50 percent of the business trips and 30 percent of the days in the office are gone forever, according to Bill Gates. With him, many trend studies confirm the same future scenario. You don’t have to be a great predictor to declare 2021 the year in which rapid developments and associated investments in the home workplace environment are rampant.

The post Top 5 SogetiLabs blogs from December 2020 appeared first on SogetiLabs.

Focus areas for enhancing Live Chat and scaling up with Bots (Part 1)

By Thomas Wesseling

In a series of blog posts I’ll be sharing some learnings with regard to enhancing live chat, introducing conversational AI and scaling up with a bot at a banking organization. Most of these learnings apply just as well to other industries where investments are made to enhance customer service channels, and live chat specifically. Let’s start with a short introduction and some context.

We are living in a 24×7 economy. People are used to doing online transactions at any time, any place and on any device. Looking, for example, at a banking organization: customers no longer visit branch offices by default. At the same time there is still a need for personal contact for all kinds of matters. Customers can get in touch by phone, but other contact channels have appeared over time, such as live chat, video calling and social media.

Customers are increasingly diverting to online channels, and this has accelerated even more with the COVID-19 pandemic restrictions in place. Customer support organizations have been adapting constantly to serve customers in several ways. At the same time, employees have shifted to working remotely. All this has resulted in heavier utilization of online contact channels while being under pressure to save costs.

How can organizations keep their service levels up to par and how do we create opportunities for personal contact with customers in this new reality? What are the contributions of Bots, Conversational AI and Intelligent Routing mechanisms to creating new opportunities for personal contact? What should be getting focus when your live chat channel is growing in volume?

Before we dive into topics such as Bots and Conversational AI I will elaborate more on live chat as a contact channel and how it can evolve over time.

Get your live chat foundation in place

As customers are used to doing online transactions 24×7, expectations of customer service and of live chat as a contact channel are changing. In essence, live chat is a synchronous communication channel which allows customers to communicate directly with employees by sending text messages back and forth. On the customer-facing side you need at least a basic chat interface to enter text messages and, on the contact center side, an interface to respond to these messages.

Live chat can be offered through a public environment (where the customer is anonymous) and through a secure environment (where the customer is identifiable, e.g. after signing in). If live chat is offered in both a public and a secure environment, consider options to redirect (and onboard) existing customers to your secure environment and let them start conversations from there. As the customer can be identified in a secure context, a more personal experience is possible for both the customer and the employee. With permission from the customer, historical data related to earlier interactions should then also be accessible to employees.

With the introduction of messenger channels, a chat conversation is changing into a more asynchronous way of communicating, as customers are not always expecting an instant response.

For the sake of this series of blog posts I assume that live chat is offered as a synchronous communication channel and that it operates in a secure context. This means that, in most cases, direct communication is possible with an employee or a (chat)bot and that the customer can be identified. Next to that I also assume that a live chat channel is already in place, a certain volume of conversations is handled and your strategy is to grow this into a more mature contact channel.

Keep on enhancing your (live) chat channel

Live chat systems should offer advanced routing and queueing mechanisms to make sure that conversations (initiated by the customer) are routed correctly to employees with the right knowledge. Depending on the number and complexity of the products and services offered by the organization, specific knowledge is required to answer related questions. If self-service options are available, employees need to be able to point customers towards these options or guide them through the process.

Let me elaborate on this a bit more. A chat conversation is always triggered by the user in a specific context. This context can be that the customer is coming from a product or service page inside your app, navigating from a product page or triggering a chat directly from a contact page. All these different sources (pages) are uniquely identifiable by their context variables. The better those context variables can be captured, the faster one can determine how to (re)route a chat conversation. To make this more concrete I will give a short example. A chat triggered on a service page related to investment banking should initially route any questions to a solution group with employees that can handle related questions. Employees assigned to this solution group are trained to answer specific questions related to the topic.
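A routing rule driven by such context variables can be as simple as the sketch below. The page identifiers and solution group names are made up for illustration; a real chat platform would expose this through its own routing configuration.

# Illustrative sketch of context-based chat routing. The page identifiers and
# solution groups below are invented for this example.
ROUTING_RULES = {
    "investment-banking": "solution_group_investments",
    "mortgages": "solution_group_mortgages",
    "contact": "solution_group_general",
}

DEFAULT_GROUP = "solution_group_general"

def route_conversation(context):
    """Pick a solution group based on the page that triggered the chat."""
    page = context.get("source_page", "")
    for page_key, group in ROUTING_RULES.items():
        if page_key in page:
            return group
    return DEFAULT_GROUP

# A chat started from an investment banking service page:
print(route_conversation({"source_page": "/app/services/investment-banking/fees"}))
# -> solution_group_investments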

Traditionally, chat conversations are limited to text only, but when text is insufficient, additional options can be added, such as support for rich media (e.g. markup and maps), video or audio conferencing, file sharing, co-browsing and even document signing.

Whether you are using a standard or custom chat frontend consider options to experiment with elevating live chat conversations to voice/video calls. To support your roadmap, look for opportunities to easily enhance your live chat by integrating with platforms or backend systems for inbound and outbound contact routing, customer tracking (CRM), document sharing and chatbot intelligence.

Customers should eventually have instant access to the best support mechanisms for the issues that they experience. Live chat can be the ideal contact channel for further experimentation, as it is easily accessible, allows a continuous dialogue across multiple channels, and offers a rich set of features and options for personalisation. Before we discuss further enhancements, we first need to get a better understanding of how your live chat is performing. More about this in the next blog post.

Thanks to Emil Wesselink and Chris den Arend for sharing their feedback and suggestions.

Sources

https://www.customercontactweekdigital.com/performance-metrics/whitepapers/dtr-disrupting-the-chat-experience

https://www.callcentrehelper.com/best-practices-to-improve-live-chat-120603.htm

The post Focus areas for enhancing Live Chat and scaling up with Bots (Part 1) appeared first on SogetiLabs.

Training neural networks with movies

By Tom Van de Ven

What do you get if you put some SogetiLabs fellows together during coffee time?

A conversation that spirals out of control quickly. In this case we got together for a virtual cup of coffee the other day. 

On a tech-related conversation thread, we ended up with a list of movies that we think are essential for a good upbringing. We talked about which movies to educate your kids with, and we extended it a bit to the set of movies with tech/nerd value that cannot be missing from the SogetiLabs movie list.

We thought it would be interesting to see how you can program neural networks using a set of movies. Programming younger kids’ brains is an easier task than programming the adult brain in this way. I think the readers of this blog may need to watch these movies multiple times for the programming to take effect.

This set of movies is the first training set for our “neural network”. The next rounds need to involve new movies. Based on feedback on this first set of movies, we can look at a second series to steer neural networks into a “SogetiLabs” way of thinking.

I cannot shake the feeling that we “are on to something here”. Give me your feedback on these movies, but above all your suggestions for a second series. I am going to run this set of movies through the neural networks of my own kids (6 and 10) to see what happens. I’ll happily try your suggestions for movies as well.

SogetiLabs movie list for programming your neural network or to watch during these dark days at the end of the year:

Movie – Neural network training purpose

The Net – Teaches a new generation about hacking and its consequences. Some good cyber security info is never a bad thing to have, and this movie is quite realistic on the topic.
Sneakers – Security and hacking again. Another good way of programming these neural networks on security.
James Bond movies – A 50-year period of movie making. This series shows the evolution of movie making and is exciting to watch. It shows how the pace of movies has changed over the years, and it is also great to see what kind of technology was expected in the coming years; we can look back to see what really happened. Other than that: great movies to watch anyway.
Swordfish – The topic of security is a recurring theme in our list (an important topic). This movie comes with some great music as well.
Monty Python movies – Absurdities and strange humor are necessary in educating the next generation. The Holy Grail and Flying Circus are essential in any upbringing.
Ghost in the Shell (anime) – Sci-fi cannot be missed in a list like this. The remake of this movie is good to look at, but the anime version may be even better to watch.
Hachi: A Dog’s Tale – Sentimental stories cannot be missed in the list. A variety of emotions is needed to do some good neural network programming, and this movie is a good example of that.
Jurassic Park – More animals, and sci-fi mixed with a bit of history. A good series of movies that spans a large amount of time, which makes it interesting for looking at the evolution of movies as well as a lesson in evolution.
The Breakfast Club – A coming-of-age story in an epic movie. It cannot be missed in this list.
Inside Out – What is going on in a child’s brain? This Pixar movie shows exactly that. Animations by Pixar are of high quality. With this movie you can compare your thought processes to those of a child. How far have you come?
The Boy in the Striped Pajamas – Freedom and the absence of war are things the current generation takes for granted. A lesson in what once was can never hurt our neural networks, and this movie is great input for that.
Schindler’s List – Another classic added to the list to show how awful that war period was.

As said, I am very curious about your comments and your suggestions for our second list. Keep your recommendations (and reasons for programming those neural networks) coming!

The post Training neural networks with movies appeared first on SogetiLabs.

Deep Dive into Azure Stream Analytics: Windowing Functions

By Alberto Alonso Marcos

In Advanced Analytics, the ability to analyse data in real time is increasingly in demand. In the Microsoft cloud, we have several tools that allow us to work in that direction. Today we are going to give a brief summary of them, and I will comment on some additional capabilities that are not normally covered, such as the importance of defining the type of windowing, the possibility of building your own UDFs (User Defined Functions), and the use of the built-in functions that handle anomaly detection.

In Azure, the current options for working with data in real time are:
  • HDInsight with Spark Streaming
  • HDInsight with Storm
  • Apache Spark in Azure Databricks
  • WebJobs
  • Azure Functions
  • Azure Stream Analytics

Focusing on Stream Analytics, we see that under this umbrella we can work with different components, whether as data sources, as storage or as visualization pieces, in addition to the connection with Azure ML in case we want to go further with predictive models. Note that for Reference Data there are two possibilities: connecting to Blob Storage or to Azure SQL Database.

But let’s move on. As I mentioned previously, in this article we will talk about windowing, and in subsequent entries we will delve into UDFs and anomaly detection. These options allow us to cover aspects that are not usually discussed in Azure Stream Analytics presentations.

What is Windowing?

It is the possibility that Azure offers to cover the requirement of creating subsets of data within the Stream based on the timestamp, with the purpose of performing operations such as COUNT, AVG, etc.

As types of windowing we have five different possibilities:
  • Tumbling Window
  • Hopping Window
  • Sliding Window
  • Session Window
  • Snapshot Window
Let’s look at each of the cases to better understand the best use case for each.

Tumbling Window

In this case, a subset is created using the TumblingWindow(timeunit, windowsize) function. Example:

SELECT COUNT(*) FROM Input GROUP BY TumblingWindow(second, 10)

In the diagram we see how the result of the query would look using this first type of window. We create a subset of the stream every 10 seconds and count the number of events in each subset, returning the result. There is no overlap here, so an event belongs to only one window. Result: 3, 1, 2

In this case, an example would be to obtain the temperature averages for a data set that covers five seconds.

Hopping Window

In the second case, the subset that is created is based on the two extra parameters that are passed to the function: the size of the window and the hop, i.e. how far apart the start of each subset is, so that the same event can be found in several windows.

SELECT COUNT(*) FROM Input GROUP BY HoppingWindow(second, 10, 5)

In the diagram we see how the result of the query would look. Result: 3, 3, 2, 2, 2

In this case, an example would be to obtain the number of events that occurred in the last 10 seconds, when the new counts should appear every 5 seconds. Hence the windows overlap.

NOTE: What would happen if we passed (second, 10, 10)? Well, we would get the same windows as with the previous TumblingWindow function.

Sliding Window

In the case of the Sliding Window, we pass the time unit and the window size to the function, as in the first option. The difference is that the window extends backwards from each event that occurs.

SELECT COUNT(*) FROM Input GROUP BY SlidingWindow(second, 10)

That is, it goes backwards, so it has at least one event and these events can belong to more than one window. Result: 2, 2, 2, 1

This would be a clear example in case we wanted to obtain the total result of events that occurred in the last 10 seconds.

Session Window

In this case, the values that are passed to the function are the unit of measure, the timeout and the maximum window size.

SELECT COUNT(*) FROM Input GROUP BY SessionWindow(second, 5, 10)

In other words, when an event appears, the window starts counting the waiting seconds. If a new event arrives before the timeout expires, the count starts again. In the example we see that after the second event the 5-second wait is exceeded, which closes the window. As we see, this closure occurs at the value of 12; this is because the windows are calculated based on the maximum window size that was set, that is, at the values 10, 20, 30, etc. That is why the window of the second set closes when it reaches the value of 30. Result: 2, 5

This would be a clear example for when we want to obtain the total number of events that arrive with no more than 5 seconds between them, i.e. events belonging to the same session.
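
As an illustrative sketch (UserId and EventTime are assumed field names), counting the events in each user session, where a session closes after 5 idle seconds and can last at most 10 seconds:

SELECT UserId, COUNT(*) AS EventsInSession
FROM Input TIMESTAMP BY EventTime
GROUP BY UserId, SessionWindow(second, 5, 10) OVER (PARTITION BY UserId)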

Snapshot Window

In this last case, we obtain the total number of events that share the same timestamp. For this, the System.Timestamp() function is used:

SELECT COUNT(*) FROM Input GROUP BY System.Timestamp()

This case is the easiest to understand, since it performs a typical grouping of values.

CONCLUSION

Being able to configure the exact values of your windows, and to define the behaviour you want in order to obtain the expected result, lets us make better use of the potential of Azure Stream Analytics. It also helps us better understand the tools we work with in Advanced Analytics and give users the right approach for their developments.

More information at:
https://docs.microsoft.com/es-es/azure/stream-analytics/stream-analytics-window-functions

Sources:
Reza Salehi, “Building Streaming Data Pipelines in Microsoft Azure.” June 2020.
Cover photo: Serpstat at Pexels

The post Profundizando en Azure Stream Analytics: Windowing Functions appeared first on SogetiLabs.

Reduce bugs by making the system easy to use

Par Eva Holmquist

The trend is that systems grow in complexity and that we’re relying on systems of systems to get the value we need. At the same time, we want to deliver new functions to the users faster. This puts pressure on testing. The first response is always: we need to automate more. And yes, that’s one part of the answer, but automation isn’t the only way to work more efficiently. The fastest way to get rid of bugs is to prevent them from being made in the first place. Of course, we can’t get rid of them completely, but we can work towards less error-prone systems. In this blog post, I’m going to talk about one way we can do that.

Easy to use correctly

A lot of bugs occur because users use the system differently from how the developers imagined, sometimes because they use the system incorrectly. A system should be easy to use correctly, and hard to use in the wrong way. Unfortunately, a lot of systems are rather hard to use …

To use an easy example, a door can open towards the user or away. It’s really irritating when you try to open a door by pushing on it and it doesn’t budge. An easy way to make sure it’s easy to use correctly is to have a handle if the door opens toward you and a flat area if it should be pushed. With such an easy design consideration, most people will use it correctly. Our systems should be just as self-evident.

Another example, from the book Writing Solid Code by Steve Maguire, is the Candy Machine Interface. When you get a craving and just have to buy some candy (believe me, I’ve been there) and run down to a vending machine, you want to get your chosen candy fast. You look up the number, press the keypad, and watch with horror as the machine delivers something else entirely. Checking, you discover that you pressed the price instead of the number. The example from the book is of an American machine, but I’ve made the same mistake in Sweden. Some vending machines, however, use an alphabetic keypad to choose the candy, and on those I’ve never made a mistake. So, if we ponder the design and figure out these kinds of easy mistakes, we can change the design to make it easy to use correctly.

This is relevant both in how we design the user interaction and in how we design the programming interfaces that our colleagues use. Those of us familiar with the C standard library have probably made several mistakes using the function “getchar”, which of course returns an integer and not a character … It’s easy to pick a bad name in the first iteration. The bad naming tends to linger, but to reduce bugs we need to take the time to refactor and make our part of the code easy to use correctly, both by users and by our fellow developers.

This part of reducing bugs is about understanding how other people think and adjusting the system to make it easy to use. But of course, that is only part of the solution …

Testing is most efficient when it is done before development. Making our systems easy to use correctly and hard to use in the wrong way is an important step towards reducing bugs. There are plenty of examples of systems that work the other way around.

What is your favourite example?

The post Reduce bugs by making the system easy to use appeared first on SogetiLabs.

Getting to know WSL2 – Linux on Windows

Par Edwin van der Thiel

These weeks at the end of the year are the perfect time to test out all the new goodies that have come our way over the last year. As an avid (15-year) Linux enthusiast, I spent the last couple of days getting Linux up and running on Windows the modern way: through the fresh WSL2 (Windows Subsystem for Linux) delivered a couple of months back.

And here are my tips and take-aways.

Why would we want it?

Ok I’ll do this one first, quick. Why would we want a Linux subsystem in Windows anyway?

First, I’m still a developer. I develop in .NET Core, NodeJS, Python, frontend projects and the occasional blockchain. Build scripts are notoriously more work if you need to support Windows, even when you convince Windows users to compile with bash. A simple example: even this doesn’t work on Windows:
[ -f "$FILE" ] && doSomething

Second, using Docker. Personally, I have never started a Windows container, and to be honest, I can’t think of a reason to. To get access to all the Linux container goodness in the Docker registry, you need a Linux host.

Understanding the hypervisor

To understand why WSL2 is nice to use, let’s look at what was there before:

  • WSL1: a “Linux-compatible kernel interface” developed by Microsoft. It could be used to run programs on top of it, but the experience was nowhere near native.
  • Hyper-V: The Windows hypervisor for running virtual machines. Of course, you could run any VM here, including the Linux ones. It was available in the enterprise versions of Windows, not the Home edition.
  • Docker for Windows: The tools for running containers. You have to choose either Windows or Linux containers; this selects the host. The host for Linux containers is a VM in Hyper-V.

Now fast-forward to WSL2, the most apparent changes that have been made are:

  • Hyper-V has been split up. The hypervisor, known as the “Virtual Machine Platform”, is available on all versions of Windows, including Home. The Hyper-V manager is still Enterprise-only.
  • WSL2 is a scalable system: you can manage one or more VMs directly in your hypervisor, which gives you access to different Linux systems simultaneously. It sits side by side with the Hyper-V manager if you happen to install both.
  • Docker for Windows can now run on a VM within WSL2 instead of Hyper-V, making Linux containers available to Home users as well.

Installation

Make sure you install the basics. You can go through steps 1–5 of the Manual Installation Steps in the Microsoft documentation.
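
The core of those steps boils down to roughly the following, run in an elevated PowerShell (sketched from the Microsoft documentation, so check the current docs; step 4, the Linux kernel update package, is a separate download):

dism.exe /online /enable-feature /featurename:Microsoft-Windows-Subsystem-Linux /all /norestart
dism.exe /online /enable-feature /featurename:VirtualMachinePlatform /all /norestart
wsl --set-default-version 2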

Accessing the correct VM

Now as I said, you can have more than one system available. One thing to understand is that Docker for Windows needs to have the default machine set to “docker-desktop”. My default installation looks like:
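
Running wsl --list --verbose gives roughly this shape of output (sketched here; the asterisk marks the default distro):

  NAME                   STATE           VERSION
* docker-desktop         Running         2
  docker-desktop-data    Running         2
  Ubuntu                 Running         2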

I installed the Ubuntu distro myself; the others were created automatically. Now, if you want the WSL command line to start inside your own VM every time, change the link:
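
The idea is to point the shortcut you use for the WSL command line at a specific distro instead of the default, for example by giving it a target along these lines (assuming the distro is named Ubuntu, as listed by wsl --list):

wsl.exe -d Ubuntu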

Note: it took me quite some time to figure out I was in the wrong machine, trying to “sudo apt-get update” within docker-desktop…

Choosing the correct distro

Not necessarily a topic for WSL2, but if you just want to use it you might get stuck choosing the right one from step 6 of the manual installation steps. Hopefully this helps a bit if you’re not familiar with the different Linux distributions (aka distros) out there:

  • Redhat: An enterprise distribution of Linux which uses RPM as its package manager; due to licensing it is not directly available on WSL2.
  • Debian: An enterprise distribution of Linux which uses DPKG/APT as its package manager, open and free to use.
  • Suse: An enterprise distribution of Linux developed by SUSE.
  • Fedora: The client version of Redhat, also using RPM as its package manager. It is free to use.
  • Ubuntu: Originally split off from Debian as a client-oriented version, so that more recent versions of packages could be delivered, trading some stability for functionality.
  • Alpine: A minimal version of Linux, used as the basis of many Docker containers.

In general Ubuntu is a very popular choice, especially since you’re most likely setting up a client system where you will want most recent versions available. However, you can install them side by side if you want.

Get started

Now, why did we set it up? For me it was about having all my Windows goodness, with the IDE and tools I’m so used to, combined with a Linux system for building portable applications, and having access to my code from both sides.

So, you need to know that your Windows drives are already mounted in your Linux VM. They are available in the default mount location; in the case of Ubuntu at least, that is under /mnt, so the C: drive shows up as /mnt/c.

Since the bulk of my time will be spent in git repositories, I created a symbolic link so that my repos directory is always available as /repos:
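
The exact Windows path will differ per machine, but the command is along these lines (the path here is only an illustration):

sudo ln -s /mnt/c/Users/<your-user>/repos /repos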

And finally

As a closing statement I would like to mention that this was my least painful experience with getting Linux up and running on Windows so far. It genuinely seems like Microsoft is embracing it, and I wouldn’t be surprised if Windows becomes one more Linux distro after all…

The post Getting to know WSL2 – Linux on Windows appeared first on SogetiLabs.

Quality engineering and the world of AI/ML

Par Parinita Patankar

As an industry, it took us a while to move from Quality Assurance to Quality Engineering. However, further enhancements in Quality Engineering are now happening at a very fast pace.

Even today, testing accounts for roughly 30% of the overall project cost. Despite bringing automation and new-age tools into play, we still struggle to deliver a quality product at an optimized cost.

Here the word “OPTIMIZE” is important. As a practitioner, I believe there are multiple ways of using AI/ML to bring optimization into testing.

Let me take a few examples:

Even before getting to automation and the complexities of NLP (natural language processing, etc.), the simplest area to optimize is the existing set of test repositories. In second- or third-generation outsourcing deals, we often see clients carrying a huge test suite, most of it manual.

We can use similarity analytics to reduce redundancy in the test suite and optimize it towards unique test cases that each cover a specific path or piece of functionality. Used in the right way, this trims down the test suite size and thereby the effort required to maintain and execute it. In my experience, with the right algorithms it can easily optimize suites by around 10-15%.

The same concept can then be extended to defect de-duplication, reducing the time and effort spent on overall defect management (triaging, root cause analysis, resolution and retesting). Extending the example further, it can also be leveraged to de-duplicate automation suites.

For many years we have been talking about and implementing shift-left techniques to build quality into systems, and in my opinion it is high time we also start focusing on shift-right. A very simple example would be performing “usage-based testing” or “targeted testing”. We often either under-test or over-test systems, which results in defect leakage and high cost.

This is another high-potential area for AI/ML. How can we analyse and learn production usage patterns and compare them with the tests being performed in the test environment? This kind of analysis gives a clear picture of the end-to-end scenarios that are missed in testing, or the percentage coverage of certain areas. Usage-based testing is not only effective but also strongly tied to user experience, and it is especially critical when you must do your best in limited time.

However, as a technology group, we always focus on functional automation to speed things up. With advancements in tooling, we have achieved higher levels of test automation, but in reality we still find that roughly 30-40% of test automation needs babysitting. Auto-healing, self-adapting automation is what is needed, and there is certainly a long list of tools that bring AI into automation – AutonomIQ, Functionize, Mabl, Parasoft SOAtest, etc. But most of them still focus on automating functional test cases.

While continuous testing is important, a clear focus on these other areas for implementing and leveraging AI/ML can improve the overall benefits.

Simple use of AI/ML to find testing focus areas based on defect-prone parts of the system, or to prioritize high defect-yielding test cases early in the test cycles, could add more value to an end-to-end solution than focusing on functional test automation alone.

The post Quality engineering and the world of AI/ML appeared first on SogetiLabs.

Generalist or specialist?

Par Alberto Alonso Marcos

A couple of weeks ago I bought Marcos Álvarez’s book “Leading with OKR”, and although I have not been able to get as far into it as I would have liked, I have managed to take some notes and learn a concept that I had not heard of before and found very interesting.

It appears in the fourth chapter, Agile Collaborators, more specifically in the section titled Generalists vs. Specialists. In it, the author discusses a concept introduced in 2009 by Tim Brown, CEO of IDEO, one of the world’s leading innovation companies: the T-shaped person, or “type T” people.

The T is used because the two lines that make up the letter map onto the individual’s skills: the vertical line corresponds to the level of expertise in a given field (specialization), while the horizontal line reflects interest in all the other things that surround it. The definition used in the book is the following: “Type T people are specialists in their field and have high empathy that helps them see and imagine problems from other perspectives and contribute their opinion in areas in which they are not, a priori, experts.” In addition, he describes them as enthusiastic, curious, and with a strong drive to learn and collaborate.

That is why, at times like the present, organizations must identify those type T people. They are valuable business assets, since their drive and vision are a plus that cannot be missed. They are the agile collaborators to include in your project.

That said, after searching a bit more about the T-shaped person, I have seen that there are other types, such as the pi-shaped or the comb-shaped person, which essentially add more vertical lines, i.e. more fields in which one is an expert.

This shows that, for employability, it is increasingly important to take a broader view of the so-called “Generalist vs. Specialist” debate, which undoubtedly “forces” us to be oriented towards learning and continuous improvement. In sectors such as technology this is especially important, since change and evolution are so rapid that, in a short period of time, you can become obsolete.

Now that the end of the year is coming, it is a good time to sit down and think about goals for the next few years. Doing an exercise in self-criticism and defining our own career plan is undoubtedly good practice. For this reason, I invite you to look into methodologies such as OKR to help you achieve those objectives, making them tangible through the key results you describe and leading you to meet all your challenges.

A toast to your new challenges!


The post Generalist or specialist? appeared first on SogetiLabs.

Automated Communication Service: Using Power Automate Connector

Par Prashanth Hamse Vishwanath

Description

In today’s digital world, there is always a need for smart communications that enable businesses to deliver smarter email communication across the entire lifecycle, empowering them to succeed in a digital-focused and customer-driven world while also meeting the expectation of simplifying processes and operating more efficiently.

Communication services are used by various businesses in their digital operations to schedule email communication for virtual events, group mailers and internal/external communications. The need is to trigger communication to multiple recipients with pre-defined email layouts at pre-defined intervals.

This tool uses SharePoint Online and the Power Platform to enable the business to send communications via a scheduler.

The solution is built using Power Automate, which is part of the Power Platform provided by Microsoft under a SaaS (Software as a Service) model. Once configured, all communication workflows are automatically triggered based on the defined conditions.

Workflow – Auto Configurable Communication Service

Implementation Details:

Conclusion

Power Automate can be leveraged for solutions like the one above. It is important to always design for configurability and reusability.

Co-author of this article: Nilesh Gupte

Nilesh is a Senior Manager in Sogeti India’s Microsoft Practice, where he serves as a SharePoint Architect. He has helped clients digitize their workplace and has participated in discovery analysis, migration, custom development, design and architecture projects in the collaboration domain. Nilesh is TOGAF 9 certified and has 14+ years of experience in consulting, solution architecture and development. He has worked with the Energy and Utilities, CMT and Retail industries.

The post Automated Communication Service: Using Power Automate Connector appeared first on SogetiLabs.
