
“Show me the Money$” – Budgeting for DevOps and ROI

By Parul Mathur

When an organization chooses to adopt a DevOps transformation, the traditional ways of budgeting and financing need to change radically as well. Annual budgeting, which requires executives to predict and forecast at the start of the year, is fundamentally flawed. The technique of allocating and locking funds for twelve straight months in this era of disruption is as old as Methuselah. To top it all, once budgets are set, CEOs spend a great deal of time reconciling what is needed today with what was predicted, compounded by organizational politics, personal agendas, and power-play decisions, all of which add to the complexity.

As I discussed in my last article, a DevOps transformation requires a considerable change in an organization’s operating model, including budgeting. Let’s look at some of the popular budgeting models used in high-performing organizations, be they startups or large enterprises:

I. Shorter Budgetary Cycles and Control Gates

  1. This model still uses the traditional annual budgeting cycle but adds review cycles, or control gates, conducted quarterly on a rolling-wave basis.
  2. Plans and allocations are jointly reviewed against the funds allocated annually, and prioritization decisions are made for that quarter.
  3. Benefits include lower financial risk, improved business visibility into dynamic market demands, and value-driven delivery by forcing teams to think in terms of a Minimum Viable Product (MVP).
  4. Challenges of this model: annual budgeting is project-based, so funding cannot be reallocated across projects, forcing executives to wait for the annual cycle anyway. If the overall approval process is not kept simple, the quarterly review creates a climate of constant budgeting in which managers spend more time racing for funds than doing actual work.

II. Product or Feature Based Budgeting

  1. This model delegates budget decisions to the product owner, who can prioritize based on an assessment of the product portfolio. The primary product owner liaises with the business to decide on go/no-go activities.
  2. The product owner can further allocate the budget to projects within the portfolio or across products, keeping the focus on the MVP in light of market trends, customer requirements, and prioritization.
  3. Benefits include simplicity and empowerment of the front line, who can decide where to invest and where not to. It also influences go-to-market and product reliability, by being able to respond to changing market trends and demands, increasing business agility.
  4. Challenges of this model: it requires more product-based thinking. Teams tend to focus on including the latest jazzy features while ignoring maintainability and technical debt. Funding for maintenance of the product(s) should also be considered.

III. Innovation Based Budgeting

  1. This model funds innovation-based products that are measured against end-user product engagement, customer satisfaction, and perceived value, to name a few. Based on the effectiveness of the product, additional funds are allocated to teams.
  2. Benefits include a steady flow of fresh ideas for innovation, giving organizations the option to choose what might work. It also encourages good ideas to reach customers quickly, improving customer-centricity. Customer-centric businesses generate greater profits, increased employee engagement, and more satisfied customers [1].
  3. Challenges include perceiving and predicting the effectiveness of the product, which requires continuous evaluation of its risk, value, growth, and potential ROI.

When organizations and technology leaders evaluate whether to undertake a technology transformation initiative with a focus on continuous improvement, the first question is about return on investment (ROI). After all, it’s all business! Measuring the impact of a process as far-reaching and transformational as DevOps within the context of ROI may initially seem impossible, but many organizations have now started putting volumetrics and quantification to it.

When calculating the returns, organizations can evaluate ROI under two categories: a value-driven and a cost-driven approach.

The value-driven approach takes precedence over the cost-driven approach when it comes to calculating returns. Value-driven factors, such as the value gained from unnecessary rework avoided per year or the potential value added from reinvestment in new features, enable organizations to respond quickly to market pressures. Value can also include opportunity cost, such as the opportunity lost by not adding features in a timely manner.

ROI cannot be a single number; it will always combine qualitative assessments with quantitative metrics. For example, cultural change can be assessed through conversations and assessments, but it is difficult to quantify. On the other hand, the amount of time, and therefore money, spent and lost on unnecessary rework each year is a significant hit to productivity and the technical economy. Since it represents costs that can be saved by avoiding unnecessary work each year, it is quantifiable and can be improved by adopting DevOps practices.

The cost-driven approach focuses primarily on the increased savings realized by implementing DevOps. Costs saved by replacing manual processes with automation, or time and cost saved by adopting a new technology as a solution, are efficiency-based savings. These are realized through continuous improvement and lean DevOps practices. For example, costs saved by reducing downtime over a year demonstrate not only business resiliency but also significant savings in infrastructure support costs, achieved by reducing the chance of failure and restoring service quickly.
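To make this concrete, here is a minimal sketch in Python of how value-driven and cost-driven factors might be rolled up into a single ROI figure. All amounts are hypothetical placeholders, not benchmarks:

```python
# Illustrative only: every figure below is a made-up placeholder.
# Value-driven gains and cost-driven savings are summed and compared
# against the annual DevOps investment.

devops_investment = 500_000  # tooling, training, coaching (assumed)

value_driven = {
    "rework_avoided": 300_000,        # value gained from unnecessary rework avoided per year
    "feature_reinvestment": 150_000,  # potential value added from reinvestment in new features
}

cost_driven = {
    "automation_savings": 120_000,  # manual processes replaced by automation
    "downtime_reduction": 80_000,   # support costs saved by failing less and recovering faster
}

total_returns = sum(value_driven.values()) + sum(cost_driven.values())
roi = (total_returns - devops_investment) / devops_investment

print(f"Total returns: ${total_returns:,}")  # Total returns: $650,000
print(f"ROI: {roi:.0%}")                     # ROI: 30%
```

Qualitative factors, such as cultural change, still sit outside a calculation like this; the point is simply that the quantifiable factors on both sides can be made explicit.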

ROI projections for a portfolio will have short-term as well as long-term returns.

Short-term gains come from competency building (training, self-learning, hackathons), people investment (rewards and recognition to motivate and retain high performers), and reduced rework, all of which lead to highly satisfied employees. Retaining existing employees is more cost-effective, preserves organizational knowledge, and gives the additional advantage of a strong, continuously learning technical force. However, a purely cost-centric approach is insufficient and will not yield results. Focusing only on cost savings signals to technical staff that they will be automated out of a job rather than liberated from drudge work to better drive business growth, with additional negative effects on morale and productivity. Striking the right balance is a must.

Long-term gains: efficiencies realized in the first year “no longer count” beyond year two, as the organization adjusts to a new baseline of costs and performance. Long-term benefits of a DevOps implementation, such as improved throughput from new ways of working, improved CAPEX through more services hosted in the cloud, reduced revenue leakage, improved wallet share through new business opportunities with existing and new customers, and improved customer experience, should be measured over a year. After that, they must be re-baselined against budget considerations as well as market trends to maintain business agility.

A point to note: with other organizational initiatives ongoing, it would not be right to attribute the entire ROI purely to DevOps initiatives. But companies that are middle or low performers in DevOps implementation have the most to gain through the various value-driven and cost-driven approaches. Continuously burning down technical debt, improving processes, and saving time and cost will encourage them to keep making progress towards operational efficiency.

References:

[1] https://www.scaledagileframework.com/customer-centricity/

[2] https://puppet.com/blog/devops-solves-business-problems-gene-kims-top-aha-moments/

[3] http://www.academicjournal.in/download/725/2-4-241-900.pdf



Presentation: RPA+AI Fabric

By Siddhesh Sawant

AI Fabric is the brainchild of UiPath. It connects seamlessly with UiPath Studio and Orchestrator to create a unified experience across all UiPath platforms. Through AI Fabric, the human workforce and intelligent RPA work together to create AI robots that scale the automation capabilities of the organization. With this tool, users developing automations can better orchestrate all functionalities of AI: deploying, consuming, managing, and improving machine learning models. AI robots are responsible for bringing machine learning (ML) into an organization’s business processes.

Download the presentation to learn more.


The history of the definition of testing

By Rik Marselis

Today, IT delivery teams are convinced that quality is a very important focus. Over the last decades, quality engineering has evolved out of the narrower activity of testing. This evolution is reflected in the changing definitions of testing over the years. In this blog, you will travel along with these definitions and see my brief analysis of this evolution.

The first definition of testing that I know of is in the book “The Art of Software Testing” by Glenford Myers, published in 1979.

He defines: “Testing is the process of executing a program with the intent of finding errors.”

The idea back then was: find the errors, solve the problems, and you will have perfect software. But in the 1980s, software quickly grew so complex that it became clear this approach was not feasible.

The first book in the TMAP series, “Testing according to TMap” (1995) aimed at making testing a structured process and introduced this definition: “Testing is a process of planning, preparation and measuring, aimed at establishing the characteristics of an information system and demonstrating the difference between the actual and the required status.”

The idea was: an information system is never exactly what was required; we will investigate it and tell where it doesn’t match, so someone can fix it.

With this definition, the focus is still that IT systems have problems that need to be fixed.

In the middle of the “zeroes”, two new definitions of testing were published: one in the TMap NEXT book and the other in the ISTQB glossary.

The book “TMap NEXT” defines in 2006: “Testing is a process that provides insight into, and advice on, quality and the related risks.”

Very short and straightforward, with a specific reference to quality and the quality risks. This is the information you need to decide whether you want to start using the product. By the way, in this definition the product can apparently be anything; it is not limited to IT products.

The focus now is on measuring quality to provide valuable information.

In 2007, ISTQB published the following definition, which remains unchanged to date:

“Testing is the process consisting of all lifecycle activities, both static and dynamic, concerned with planning, preparation and evaluation of software products and related work products to determine that they satisfy specified requirements, to demonstrate that they are fit for purpose and to detect defects.”

In this very lengthy definition, the starting point is that the test object is good (“demonstrate that they are fit for purpose”), but at the end the definition still states that testing looks for faults (just as Glenford Myers put it in his definition).
Unfortunately, ISTQB makes no direct reference to quality. But they do refer to both verification (satisfy specified requirements) and validation (fit for purpose).

On 17 March 2020 we launched the latest book in the TMAP body of knowledge, titled “Quality for DevOps teams”. In this book the definition of testing is:

“Testing consists of verification, validation and exploration activities that provide information about the quality and the related risks, to establish the level of confidence that a test object will be able to deliver the pursued business value.”

This definition carries many elements of the previous definitions; it is the result of an evolution. New is the extension with exploration (mainly to explore unexpected behavior and unforeseen possibilities). Also new is the emphasis on providing information to stakeholders so they can establish their confidence in business value, because we take the pursued business value as the starting point for testing.

This definition also conveys the notion that mediocre quality at the right moment is sometimes better at generating business value than the highest quality delivered too late.

Over the years, testing (and in a broader sense quality engineering) has made the journey from “find faults and fix them” to “determine whether it is good enough to generate business value”.

Good luck and have fun testing!!


Green IT – Adding Sustainability to Your Operational Excellence

By Mathis Hammel

Did you know that more than 4% of humanity’s environmental footprint originates from information and communication technologies? That is twice as much as air traffic.

In the current state of global climate emergency, major efforts must be made in the IT sector to reduce humanity’s unsustainable environmental footprint. But how do we help without hindering performance and operational capacity?

The origins of the IT carbon footprint

Although the environmental impact of IT takes many forms, such as deforestation and toxic materials, our main focus in this article is greenhouse gas emissions, the main contributor to global warming. The most common of these gases is carbon dioxide (CO2), but some other gases we emit have much stronger effects. One such gas is CHF3, used primarily in the semiconductor industry, whose global warming potential is 14,800 times that of carbon dioxide. A simple way to measure the effect of all greenhouse gases on a common scale is to use carbon dioxide equivalent (CO2e), where the stronger and weaker effects of the various gases are weighted accordingly. In terms of CO2e, a kilogram of CHF3 released into the atmosphere is equivalent to 14,800 kilograms of CO2.
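Expressed in code, the CO2e conversion is a single multiplication by each gas’s global warming potential (GWP). A minimal Python sketch using only the figures quoted above:

```python
# Convert a mass of greenhouse gas into kilograms of CO2-equivalent.
# GWP values: CO2 is the reference (1); CHF3 is 14800x stronger, as cited above.
GWP = {
    "CO2": 1,
    "CHF3": 14_800,  # fluoroform, used primarily in the semiconductor industry
}

def co2_equivalent_kg(gas: str, kilograms: float) -> float:
    """Weight the emission by the gas's global warming potential."""
    return kilograms * GWP[gas]

print(co2_equivalent_kg("CHF3", 1.0))  # 14800.0 kg CO2e for one kilogram of CHF3
```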

IT equipment can be split into three categories, all of which generate pollution during their production, use, and decommissioning:

  • Data centers
  • Telecommunication networks
  • Terminals: computers, tablets, smartphones, …

The diagram below shows how the environmental impact is distributed between those categories during their life cycle:

Note that the end-of-life stage isn’t counted in this chart due to a scarcity of reliable global data.

As you can see, around half of the global CO2e emissions come solely from the production of electronics, and the large majority overall is due to home electronics.

The distribution of emissions during usage is more balanced than in production, but not always: for some usages (such as video streaming on a smartphone), the CO2e generated by the network and data center is estimated to be more than 200 times the emissions generated by the terminal’s power consumption alone.

But how do I help?

As individuals, we all have a role to play in making IT greener. Here are a few suggestions of what you or your company can change towards sustainability:

  1. Consume less

This one is an obvious option, but it goes a very long way. Buying only the electronic devices you consider essential, and renewing your hardware less often, are the keys to reducing the greenhouse gas emissions caused by manufacturing, in addition to considerable financial savings.

At company level, the Shift Project estimates that renewing corporate laptops every 5 years instead of 3 can reduce the carbon impact of your terminals by 37%. The same reduction can be achieved by having half of the employees who own a professional smartphone use the same device for both personal and professional purposes (see the back-of-the-envelope sketch below).
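As a rough illustration of why renewal frequency matters: amortize a laptop’s manufacturing footprint over its service life, and stretching that life from 3 to 5 years cuts the annualized embodied emissions by 40% (the Shift Project’s 37% also accounts for usage, hence the small difference). The embodied-carbon figure below is an assumed placeholder:

```python
# Simple amortization of embodied (manufacturing) carbon over a laptop's life.
EMBODIED_CO2E_KG = 156  # assumed manufacturing footprint of one laptop

def annual_embodied_footprint(lifetime_years: float) -> float:
    """Spread the one-off manufacturing emissions over the service life."""
    return EMBODIED_CO2E_KG / lifetime_years

three_year = annual_embodied_footprint(3)  # 52.0 kg CO2e per year
five_year = annual_embodied_footprint(5)   # 31.2 kg CO2e per year

reduction = 1 - five_year / three_year
print(f"3-year renewal: {three_year:.1f} kg/yr, 5-year renewal: {five_year:.1f} kg/yr")
print(f"Reduction: {reduction:.0%}")  # Reduction: 40%
```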

  2. Be aware and raise awareness

The first step of change towards sustainability is realizing there’s a problem. Before diving into writing this article, I would never have expected the environmental impact of technology to be so shockingly high. By being aware of the effects of your own consumption and making others aware too, you are already making change, turning every purchase or usage decision into a conscious act.

Without becoming a die-hard activist of green computing, here are a few things you could try to initiate for better awareness of the IT environmental footprint:

  • Before launching a project that involves terminals and/or cloud computing, take a moment to measure its environmental implications and plan concrete actions to make them as small as possible. One of our clients in France even requires this analysis in every tender bid, which I think is remarkable and can greatly improve a company’s image.
  • Pushing your company to create a green IT dashboard that tracks environmental KPIs along with their CO2e: number of terminals replaced each week, live power consumption of cloud and network equipment, number of emails sent each day, etc.
  • Sharing this article with your colleagues, friends and family (wink wink)
  3. Reduce the energy usage of your applications

If you work near the fantastic world of software development or computing infrastructure, you may have the opportunity to directly reduce the power consumption of your applications and hardware: locating a data center in a colder region to reduce the need for artificial cooling, optimizing your algorithms to run faster with fewer resources, moving to green electricity, and so on. Currently, most of the work is done at data center level, and people tend to forget that applications are what ultimately consume energy. Here at Sogeti, we are embracing this new horizon of optimization with several initiatives, such as Green Testing, which is available to our clients.
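To make “optimizing your algorithms” concrete, here is a toy Python sketch: the same lookups done with a data structure suited to the job run orders of magnitude faster, and fewer CPU cycles ultimately means less energy drawn:

```python
# Toy example of an algorithmic optimization as an energy lever:
# identical results, far fewer CPU cycles, hence less power consumed.
import time

haystack = list(range(100_000))
needles = list(range(0, 100_000, 100))  # 1000 values to look up

# Naive approach: each membership test scans the list, O(n) per lookup
start = time.perf_counter()
hits_slow = sum(1 for n in needles if n in haystack)
slow = time.perf_counter() - start

# Optimized approach: one-off conversion to a set, then O(1) per lookup
haystack_set = set(haystack)
start = time.perf_counter()
hits_fast = sum(1 for n in needles if n in haystack_set)
fast = time.perf_counter() - start

assert hits_slow == hits_fast  # same answer, different cost
print(f"list scan: {slow:.3f}s, set lookup: {fast:.5f}s")
```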

You will also notice that most of the time a sustainable system is also less costly, so that’s one more argument in favor of green IT!

Thanks for reading and sharing this article. You have all my contact info below; I’d be glad to have fruitful discussions about the topic and give more advice to anyone who needs it.



Home automation and thoughts on future work at home

By Dominique Colonna

My eyes open, a soft light gradually invades the room, it’s time to get up.

Still that damn headache.

I feel like coffee.

From my bed, I open the shutters and check the weather forecast: very nice weather today, too bad we have to work! I think about my agenda, and it appears: two meetings this morning and a client presentation this afternoon with Dan. Well, he’s not connected yet, sleeping hard… Just Oliver and some members of the group.

The smell of coffee now invades my room, and soon afterwards I hear the rolling of Bodi, my clever companion, bringing me a well-packed espresso. I get up, swallow it in one gulp, and go to the bathroom. It’s hot, and I think of fresh water. The jets immediately spray me with micro-droplets that refresh my body. I think of a soft, warm breeze. The dryer starts, projecting a draft of air that dries me in a few seconds.

This headache doesn’t go away. I take a pill and think I need to make an appointment with my neurologist. The signal’s still too strong, or it must have moved again; maybe it was my fall down the stairs last week. The appointment is confirmed for Thursday.

I’m sitting in my home office’s new ergonomic chair and think about my wall screen; it turns on. I check my emails and dictate some answers. I think again about the nice weather outside, at the beach… my holiday photos appear on the screen. I really need to concentrate on my work. Especially so as not to do this in the middle of a client meeting!

The first meeting is starting, I’m already connected. Let’s get started. This neural implant is really fabulous.


Webinar: An Introduction to Quantum Computing

By Michiel Boreel

We face an exploding demand for data storage and processing, while innovation in conventional computer technologies is breaking down. The bottom line is that, for broad AI, complex weather forecasts, and other applications, we will need new types of computer systems.

Quantum computing is one of the most promising next generations of computing. Leveraging physics at the level of atoms, quantum computers allow for order-of-magnitude performance gains. By performing massively parallel calculations, computations previously intractable on conventional systems become possible within days. Many use cases are anticipated that remain infeasible with classical computers, including gene therapy, drug simulation, aerodynamic modelling, supply chain optimization, and financial modelling.

Because quantum computers are fundamentally different from classical systems, development tools, middleware, and other parts of the software stack must be carefully adjusted.

This webinar provides an introduction to quantum technology. Michiel Boreel and Julian van Velzen touch upon the differences between quantum and classical systems, quantum applications, and steps to get quantum ready. No prior knowledge of quantum mechanics is required.

About Julian van Velzen:

Julian is an enthusiastic big data engineer with a strong background in computational physics. As the group SME on the next generation of computer systems, he takes clients on a journey into the exciting era of quantum computing.


Reframe technology’s purpose and strengthen organizational alignment

By Léon de Bakker

Get the organization behind your message: technology shapes positive impact

Help your organization to embrace technology as an indispensable tool for positive impact. Reframe technology’s purpose and reach out beyond tech. Which is to say, build bridges.

Reading time: 3 minutes.

Technology is an indispensable tool for delivering and improving your organizational positive impact. But it is not necessarily the first tool that comes to most minds; more likely candidates are material use and recycling, the supply chain’s footprint, working conditions, and CO2 compensation. You can put technology on that same stage. To achieve this, it’s important to reframe technology’s purpose and strengthen organizational alignment.

This post is part of a series about technology and organizational positive impact

  1. A guide to organizational positive impact, tech edition
  2. Doing good drives profitability
  3. Major events and big statements
  4. Be consistent about your positive impact
  5. A strong purpose proposition requires agility and resilience
  6. Agility and organizational complexity, beware of the present
  7. How to create positive impact with technology? Foster critical thinking
  8. Technology as an accelerator for positive impact
  9. Assess how technology shapes your organizational positive impact
  10. Reframe technology’s purpose and strengthen organizational alignment

The human approach

If you want to put technology on an organizational stage, people outside of tech need to understand its potential. So, you could talk about technology. But that might not reach your audience. My advice is to be aspirational. Turn your message into a compelling story.

This is a tweet from Tim Cook, CEO of Apple Inc., to celebrate National Teacher Day 2020 in the middle of the Covid-19 crisis: “Today we celebrate teachers everywhere, like Jodie from Coppell Middle School East in Texas, who are imagining new ways to keep their students engaged and learning during these challenging times. Thank you for your compassion, creativity and tireless work!”

To me that is more engaging than “We provide a cross-platform, multi-cloud solution. Our DNS service is latency-aware, ensuring that users have a fast experience when using applications. In addition, we guarantee a high level of security by protecting encryption keys within Hardware Security Modules designed and validated to government standards for secure key management. Our containerized applications are easy to scale and maintain and make efficient use of available system resources”.

Forging alliances outside tech

To reach and collaborate beyond technology, you need to shift from a technical to an organizational perspective. Listen, connect, partner up, and find common ground. Make sure you are heard too. Craft your story. What does technology have to offer? Show outcomes. Be specific and make it personal. Your audience needs to feel you are talking to them.

Introduce past successes that are the result of applying technology. And introduce present-day concerns that can seriously hurt your organization. Make your partnership the hero that builds on strengths and overcomes concerns. For examples of successes and concerns, use the insights gathered during the assessment described in the previous post. Limit your examples and use only those that strongly appeal to your audience.

It takes time and effort to forge alliances. Therefore, make it part of your routine to build bridges between tech and non-tech. Low-hanging fruit includes existing crossovers, such as the cross-functional team that performed the quick scan and assessment discussed in the previous post. Also identify other existing cross-functional teams that are successful. The team members and their management are likely to understand the power of technology and to be ambassadors for its contribution to your organizational positive impact. Especially now that technology has proven indispensable to business continuity during the Covid-19 crisis.

Wrap-up

This is the last post in a series about technology and organizational positive impact. It’s paramount that technology is up to standard if it is to be highly beneficial to the positive impact and the bottom line of your organization. More importantly, the real lever is the organization itself. Outstanding technology only shines if the organization is willing and able to get behind a long-term commitment in a resilient and agile way. You’ll find my posts and more at SogetiLabs.


Tips

Acknowledge and address the world outside technology.

If you want to move beyond technology and include organizational perspectives, then technology itself needs to move beyond technology. This requires different behavior and a different mindset from tech people, from the CIO to management to operations. You need to be an ambassador. Acknowledge and address the world outside technology. Incorporate more than facts and features, because reasons, facts, and logic are only half the story. You need to appeal to emotions and intuition.

Create space for technology at C-suite level and make your actions part of your organization’s strategic roadmap, such as a business model canvas. I’ve mentioned the canvas in the second post in the series, “Doing good drives profitability”.

Intuition, emotions and storytelling

There is ample research on the role of intuition, emotions, and storytelling in a business environment; it is worth exploring when you craft your story.

Metrics on sustainability

If you want to reach beyond technology, you could use metrics that are known outside of tech. For further reading on sustainability metrics, see the Sustainability Accounting Standards Board (SASB) and “Toward Common Metrics and Consistent Reporting of Sustainable Value Creation”, a World Economic Forum paper prepared in collaboration with EY, KPMG, Deloitte, and PwC.


Why you should not care about responsibility

By Edwin van der Thiel

I come from the old world, from a time when everything was planned, written down, standardized. The world of Service Level Agreements. Of protocols, of liability, of blame, of tickets. And even though everyone always tried to account for any eventuality in advance, there was always one constant:

Sh… Stuff happens.

I’m in the business of helping customers deliver new solutions and, in general, helping them set up or improve their agility/DevOps-ness. A couple of weeks ago, at a customer, we had a meeting to discuss some incidents that had happened: how they occurred, and how to deal with them in the future. One pervasive issue that kept popping up was where each incident was supposed to end up, and who had to solve it.
And before I even knew it, this was my response to the discussion:

Who cares?

Trying our best, we could not find someone to blame. The reason: no one was to blame. Everyone tries their hardest to keep the system working properly. The actual problem – the customer has an issue – needs to be solved. Rather than looking for the place to direct the problem to, ask yourself this question:

How can I help?

To me this is the essence of DevOps: everything works.

  • If something doesn’t work, fix it.
  • If something doesn’t work well enough, improve it.
  • If something is missing, build it.
  • If you don’t know how, learn it.

In general, if there is a problem, do not try to find the ‘correct’ location to drop it. Instead, cooperate to improve.

  • If that open source library has a bug, go fix it and make a pull request.
  • If that component is a single point of failure, make it redundant.
  • If your data center has one internet provider, go to the cloud.
  • If your cloud provider doesn’t offer sufficient availability, implement multi-cloud redundancy.
  • If the planet might get destroyed by a solar flare, launch a backup system to another planet.

You should not care about availability, nor bugs, nor dependencies.
Treat every occurrence as an opportunity to improve and do it together.


Don’t test with production data

By Bart Vanparys

Many believe that testing is not representative if we have not tested with production data. It’s hard to build confidence in a solution if people have not seen it behave correctly with actual production data. But testing with production data is often a bad idea.

Let’s first explore a major driver for testing with production data. It originates from the belief that many dangers are hiding in production and that we cannot simulate those conditions with synthetic data. Using actual production data, with all its quirks and edge cases, will reveal issues that we could not have imagined. Right?

But that’s not a good reason.

Firstly, this implies that we would learn only at a very late stage that our solution is not designed to handle the variation of data that occurs in production. It’s good that our testing finds this, but it’s too late. Identifying those cases should happen very early in our development track. It is highly recommended to explore our production data thoroughly and then base our design on a solid understanding of it.

Secondly, how much production data would we need to test with to cover all these cases? After all, many of these special data variations occur only sporadically. Will we find them if we take one day of production data? One week? One month? Before you know it, we are testing with extensive data volumes, only to gain relative certainty that our solution can handle all special cases.

This brings us to another important point. Production data is yesterday’s data. It’s in the past. It tells us what transactions our system handled in the past. Of course, it is important to ensure continuity, and we need to test for regression. But the usage of our solution changes over time, and so does the solution itself (and the services that we interface with). We need to be confident that our solution handles tomorrow’s data well. Testing with yesterday’s production data gives us false confidence. So we’ll need new or updated data to cover tomorrow’s data patterns as well.

There are also practical concerns to testing with production data:

  • Size: the larger our test data sets, the more time is required for test execution. We should aim for the smallest possible test data sets that still provide sufficient test coverage, especially as we aim for continuous (automated) testing that gives us fast feedback (we prefer minutes over hours, hours over days).
  • Control: if we use production data, we must really understand how much variation it actually covers. So we must know our data, which requires time to explore.
  • Security and privacy: although there are many excellent tooling solutions for these concerns, caution is warranted, as the potential impact of breaches is significant.

So why do we still use production data to test?

Often, it’s the simplest solution. The data is there, so let’s use it. Creating synthetic data requires analysis and effort. It requires a good understanding of your data: how records connect, how they might change, what the edge cases could be…

On the other hand, these are exactly the reasons to use synthetic data. The additional effort pays for itself in test coverage, risk reduction, and additional confidence. Often we can construct very small data sets that provide much higher coverage than weeks of production data. Small test data sets mean fast test execution, which increases our agility and speed of learning.
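As a minimal sketch of what such a small, high-coverage synthetic data set might look like (all field names and cases here are hypothetical):

```python
# A handful of deliberately chosen synthetic records can cover more edge
# cases than weeks of production data. Every field and case is made up.
from dataclasses import dataclass

@dataclass
class Order:
    customer_name: str
    quantity: int
    country: str

synthetic_orders = [
    Order("Alice", 1, "BE"),                     # happy path
    Order("", 1, "BE"),                          # missing name
    Order("Bob", 0, "NL"),                       # zero-quantity boundary
    Order("Chloé-Anne O'Neill", 999_999, "FR"),  # diacritics, apostrophe, extreme volume
    Order("Dana", -1, "XX"),                     # negative quantity, unknown country code
]

for order in synthetic_orders:
    print(order)  # in practice, feed each record to the system under test
```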

This also means that synthetic test data need to be good. They should not be based on a theoretical expectation of our solution. They should be based on sound analysis of our data and information architecture, complemented by good exploration of actual production data. If this analysis is not done properly, we will lose confidence.

This brings us to a last point. Humans are human. We will not believe that something works based on a theoretical exercise (which is how synthetic data is sometimes perceived). We only believe that it works if we have seen it working with data that is recognizable to us. Business users especially require this confidence, and there is nothing wrong with that. We can complement our synthetic data with (anonymized) production data sets. Especially during demonstrations and (user) acceptance testing, show the solution with production data. If it helps people sleep better, why not?

But let’s no longer rely on production data alone for actual testing, when synthetic test data can help us maximize test coverage, speed up feedback loops, and reduce security and privacy concerns.

Synthetic data need to be good. Don’t just use the obvious theoretical cases. Do your homework and analysis.


Predict the future with Science Fiction

By Tom Van de Ven

Ilium by Dan Simmons is a great example of an SF novel with no end. Without giving away too much of the plot, the story ranges from the Greek gods to quantum teleportation and different versions of Earth and Mars. While reading this book I dreamt away to future civilizations. For short-term timelines here on Earth, the book is way over the top. On the other hand, imagining a completely over-the-top situation helps in building a future outlook. It can help you gain insight into what new business you should invest in or what service could be a success. You might get the first ideas for the design of a new version of an existing product.

Sci-fi thinking with near-infinite data transfer speed

Let me give you an example of how this works when you combine science-fiction thinking with present-day events that might influence the future. Recently I saw a news item about a new data transfer speed record. Apparently, it is possible to transfer data at 44.2 terabits per second, not only in a laboratory setting but in a real field trial over 75 kilometers of optic fiber cable. This is part sci-fi and part real. What if we push it a bit? Let’s create a world where this data speed is common and wireless data communication is in the order of magnitude of 100 Gb/s. All the products around us would change. Even the way we work with all these products would change. With this as a starting point, we can continue our thought experiment.

With these kinds of data transfer speeds, no product would need local storage anymore. Connection stability aside, all data would be available instantly. Products could become smaller, more energy-efficient, and much smarter. All the data in the world would be available everywhere. We could create a series of supercomputers around the world that do all the calculations for us, and we would only need receiving devices that give instructions to this global intelligence and data source. The concept of smart cities, for example, would be “easy” to implement in this way. No more difficult data collection and edge computing needed.

Going over-the-top with brain copies in space

Take this one step further, using a bit of the story of the book Ilium where I started. We can store all our data and our supercomputers in space: a series of satellites with huge amounts of capacity (which can be distributed over smaller satellites, like Elon Musk’s Starlink project). With supercomputers in space, we have our cooling system sorted, and energy is of course of the solar kind. With all that in place, we would have everything needed to make data and computing power available anywhere in the world.

If you take this a few steps further, you can dream of creating a copy of the brain in one of those space storage units. It might even produce some form of artificial superintelligence.

Sci-fi-to-extreme lets you see a glimpse of the future

Maybe I am getting a bit carried away here. Back to the starting point: take a situation and sci-fi it to extremes. It can help you see a bit into the future. It can help you come up with new ideas, new services, new products. Sci-fi has proven to be a good source of predictions of the future (Star Trek’s “flip phone”, Iron Man’s flying suit, The Jetsons’ smartwatch, Jules Verne’s fully electric submarines, or the antidepressants already described in Brave New World). The funny thing is that we can name even more sci-fi situations that never happened. We could also say they have not YET happened. :-)

Use sci-fi thinking to free your mind a bit, to step out of the impossibilities that are too easy to put out there. It might get you a great product! If not, you at least had fun in the process.

What is your sci-fi vision?

Let me close this story with the highly recommended book Ilium that started it all for me. In the book, it is possible to quantum-teleport throughout the galaxy. Let me know what your train of thought looks like when this becomes a reality!


Bad Workmen

By Alistair Gerrard

The saying goes that a bad workman blames his tools. And if you’re like me then you’ve probably used a screwdriver to tighten or loosen a screw. Or to open a tin of paint. Or as some sort of chisel. Or as a tyre lever. Especially those really big screwdrivers. And when the big screwdriver doesn’t work as a chisel or a paint can opener or a tyre lever, you may blame it using the most colourful of terms. I do.

If you haven’t, then I tip my hat to you, and you are probably much better at various DIY tasks than I am. A testament to this can usually be found by a quick visual inspection of any of my projects. Or by the various injuries I have sustained in the process, thankfully all relatively minor to date.

Tasks which went, as far as I am concerned, particularly well are few and far between. One was my disassembly and reassembly of a fitted wardrobe, all to access plugs hidden by the original installation and to install a TV stand. The better one was the design, build, and installation of custom shelves which spanned the gap under a bridging wardrobe and provided an integral reading light on each side. I was dead chuffed with those.

I’m proud of that particular job because I understood the scope of the work properly upfront, allowing me to better (under)estimate the effort required. I also carefully planned each section, and I knew I had the right tool to complete each part. Or, to put it another way, I didn’t just use one of those cool, big screwdrivers to do everything. I used the correct tool for each of the jobs required, despite owning one of those cool, big screwdrivers. And, more importantly, I didn’t blame the cool, big screwdriver when I was slightly misusing it. I was, for once, a good workman.

Many other tasks have been merely adequate because of my impatience, driven by an under-appreciation of the size of the task and a lack of training with the specific tools required to complete it.

There are parallels to be drawn between my quite frankly chequered DIY history and situations we encounter all too often on projects. As per my opening paragraph, I have recently witnessed complaints about tooling within a project, whereby the tooling was seen as a weak link because, in effect, the project was using a cool, big screwdriver as a chisel.

Tools are designed to do specific jobs. Even a multi-tool only does multiple jobs because it has many parts, each designed to do a specific job. But there is more to this issue than that. Rationally, I know I have little recourse to swear at a cool, big screwdriver when it does not behave like a chisel. I also can’t really blame it for failing to work on a small Phillips screw as I repair my glasses. I have to maintain a reasonable expectation that the tool will do well the job it is designed to do, especially if I use it correctly.

It is therefore also unreasonable to blame an out-of-date software tool for not delivering the agile-supporting features you need when you are running a version that is 5 years out of date, was tailored to waterfall or V-model delivery methodologies, was never really configured beyond the standard installation when you bought it, and you’re not even using that configuration correctly.

I’ve found the same to be true when more modern, agile-supporting tools are expected to solve all problems while the default configuration, which does not, for example, reflect the defect management strategy, is still in use. It’s unreasonable to blame a tool you haven’t bothered to set up properly for issues with defect management when it has not been configured to your process. It allows users to bypass the process and re-introduce the very issues the process was designed to prevent.

At Sogeti, we agree that agile projects are driven by people, process, and tools. Yet we don’t rush to tools, because we know that you need cultural buy-in from people to achieve success, and that means identifying the ways of working those people want to follow, or the processes they need. Only then can you provide those people with the right tools to do their jobs.

Not only do we have well-honed processes to ensure our clients have the right tools for the jobs they require, but we also equip our consultants for success, to ensure they can support your business and help bring about the change needed for your continuing success.


Defining Antifragility

By Edzo Botjes

Turbulent times ask for resilient organizations

The financial crisis of 2008, the dotcom crisis of 2000, and the current crisis of 2020 all highlight the need for organizations to become resilient, or even antifragile, to survive (unexpected) external stressors so that they remain significant for their stakeholders.

————-

So, what is antifragility, and what is its application in organizational design?

In my master’s thesis, titled “Defining Antifragility and the application on Organisation Design,” I combined the research literature on resilience with antifragile attributes, as well as a variety of engineering models, to form the Extended Antifragile Attribute List (EAAL) model.

But before we delve deeper into the EAAL model, let’s look at EAAL’s conceptual model.

We all know that the challenge for organizations is to stay relevant in the current Volatile, Uncertain, Complex and Ambiguous (VUCA) world and to regain their ‘value’ after being disrupted by a stressor. To deal with the VUCA world, enterprises need to be resilient, and agility is one of the tools for achieving that.

To survive all stressors, being resilient is not enough. There will be stressors outside of what a resilient system can absorb; these are called Black Swans. To survive a black swan event, an enterprise needs to be antifragile. Figure 1 below shows the behavior of a system under stress over time, in terms of the value it delivers.

Resilience and antifragility by Nassim Nicholas Taleb and P. Martin-Breen

(Figure 1)

The EAAL model 

The Extended Antifragile Attribute List (EAAL) model is a summary of the available literature on the attributes of a resilient and antifragile system-of-systems, validated by experts and C-level management.

This model is the first step in the design process of a resilient or antifragile organization. It proved to enable C-level managers to determine the level of resilience (Figure 1) that (parts of) the enterprise needs to have, and to determine the attributes (Figure 2 below) needed to develop the selected behavior.

(Figure 2)

This study (see below) can be extended through practitioners’ reviews by design authorities outside of the organizational domain. However, a follow-up study is needed on the causal relationship between the attributes and the behavior of the system.

Note: The content of this blog is a summary of my master’s research thesis:

Defining Antifragility and the application on Organization Design – a literature study in the field of antifragility, applied in the context of organization design (available as open access).

Do you want to know more or are you interested in a video call to discuss the topic in detail? Please feel free to contact me. The research is pretty dense, and it would be my pleasure to discuss it with you.


From Tester Sapiens to Tester Optimus: The evolution of testers in the Agile & DevOps era

By Albert Tort

In 1859, Darwin formulated the theory of evolution: a process by which organisms change over time, as a result of changes in heritable physical or behavioral traits, to better adapt to their environment and thereby survive and compete. Following this theory, some philosophers also foresee that humans will evolve from Homo Sapiens to Homo Optimus (human-machine).

In the era of Agile and DevOps, testers also need to adapt to new work environments, so new skills and abilities are required to provide more value, from a quality point of view, in modern IT delivery approaches. Two main drivers characterize the new role of testing and quality professionals as quality/test engineers in the Agile and DevOps era: (1) the need for T-shaped quality profiles aimed at pushing continuous testing and quality-driven co-creation in agile teams, and (2) an engineering approach to quality assurance, with a special focus on automation (test automation, CI/CD, RPA…) and advanced analytics as the main accelerators for DevOps environments.

This “natural evolution”, adapted to new environments, recalls the theory of evolution. Testing roles are shifting from the Tester Sapiens species (traditional functional testers) to the Tester Optimus species (quality engineers with a focus on quality, automation, optimization, and even AI assistance). In other words, the challenge of intelligent (optimus) testing and quality assurance must be addressed, since not everything can be tested with limited resources. The only way to address it is to combine smart testing, to optimize what we test, with technical focus, to improve how we test. In our context, this evolution is not just a forecast but a present-day reality.

Software quality needs to be conceived with a wide vision: every aspect that is positively perceived by users implies better quality. The satisfaction of expected functionalities at different levels certainly means quality, but so do performance, security, usability, UX, and more. Therefore, in the context of agile teams based on co-creation, taking the most of different points of view, quality engineers are required to be facilitators (no longer controllers) from the very beginning, as key roles in the generation of quality value. This objective requires T-shaped quality profiles. A T-shaped engineer is a professional who has deep knowledge and skills in a particular area of specialization (the vertical part of the T), along with general knowledge and connections across disciplines (the horizontal part of the T). In the context of quality, it means a professional with a plan to develop deep technical expertise in a specialization (performance, security, usability, UX, automation, analytics…) and with the agile mindset to be a continuous promoter of quality within teams working in DevOps approaches, connecting and facilitating the different pillars of quality by engaging the other roles and making the most of existing resources.

The technical component is also important, as testing will no longer rely on manual functional testing alone. Being an Optimus tester means pushing testing and quality assurance to the next level by implementing accelerators: test automation, the integration of quality activities into CI/CD pipelines, the automation of other tasks with RPA, and the continuous measurement of quality in IT delivery through advanced analytics, which in turn may evolve into artificial intelligence systems that enable anticipation and support smart automation. Clearly, this requires an engineering mindset, skills, and abilities.

No doubt the context evolves; no doubt testing profiles change. So let’s adapt, evolve, and make the most of the key role of testers and quality engineers in the era of Agile and DevOps.


Assess how technology shapes your organizational positive impact

By Léon de Bakker

Take a positive impact perspective

You can help your organization by assessing how technology shapes your organizational positive impact.

Reading time: 4 minutes.

For your organization to embrace technology as a tool for positive impact, it first needs to understand technology’s potential and pitfalls. You can help by assessing how technology shapes your organizational positive impact. The assessment will be more effective if you approach your technology from a positive impact perspective rather than a technology perspective.

This post is part of a series about technology and organizational positive impact

  1. A guide to organizational positive impact, tech edition
  2. Doing good drives profitability
  3. Major events and big statements
  4. Be consistent about your positive impact
  5. A strong purpose proposition requires agility and resilience
  6. Agility and organizational complexity, beware of the present
  7. How to create positive impact with technology? Foster critical thinking
  8. Technology as an accelerator for positive impact
  9. Assess how technology shapes your organizational positive impact
  10. Reframe technology’s purpose and strengthen organizational alignment

A positive impact approach

To understand technology’s strengths and concerns, assess your digital tools through the lens of 5 key focus areas for positive impact: inclusion, empowerment and collaboration, personal safety, environmental footprint, and data and privacy. You can find a description of these areas and their relation to technology at the end of this post.

Step 1, quick scan of digital tools

Most likely it will be too big an undertaking to assess all your digital tools in full. Maybe there is a large number of very diverse or complex digital tools. Maybe there is a lack of support, cooperation, or resources, or a lack of experience in the field of impact assessment. You can mitigate some of these impediments by forming alliances, by automating and outsourcing testing, and by building on existing tests and audits. But even then, you might want to focus on a limited number of digital tools. So start with a quick scan to select digital tools and allocate resources.

For the quick scan, assemble a group of cross-functional experts to assess how technology shapes your organizational positive impact. Plot your digital tools on two axes: impact on planet and society (positive vs. negative) and magnitude of that impact (high vs. low); a sketch of such a chart follows below. Make a chart for each of the 5 areas. Ask a facilitation expert to prepare and lead your quick-scan session. Limit the session to half a day or less, aiming to get all required input and results during the session.
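Here is a hypothetical sketch of such a quick-scan chart in Python, assuming matplotlib is available; tool names and scores are made up for illustration:

```python
# Plot digital tools on the two quick-scan axes for one focus area.
# All tools and scores below are fictitious examples.
import matplotlib.pyplot as plt

tools = {
    "Customer app":      (0.7, 0.9),   # (impact, magnitude)
    "Legacy batch jobs": (-0.6, 0.8),
    "HR chatbot":        (0.2, 0.3),
    "Ad targeting":      (-0.4, 0.6),
}

fig, ax = plt.subplots()
for name, (impact, magnitude) in tools.items():
    ax.scatter(impact, magnitude)
    ax.annotate(name, (impact, magnitude))

ax.axvline(0, color="grey", linewidth=0.5)  # boundary between negative and positive impact
ax.set_xlabel("Impact on planet and society (negative to positive)")
ax.set_ylabel("Magnitude of impact (low to high)")
ax.set_title("Quick scan: environmental footprint")  # repeat per focus area
plt.show()
```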

Step 2, setting priorities and allocating resources

Use the charts to prioritize actions and to allocate resources for a deeper assessment of your digital tools. Move the items with a high negative impact to the top of your to-do list. Be aware of interdependencies: digital tools are connected on a deeper level. Select the full group of connected tools if some of them have a high negative impact.

Step 3, deep scan of selected digital tools

The next step is a deep scan of the selected digital tools. By assessing your digital tools from the perspective of the 5 key focus areas, you’ll take a different vantage point, one that makes it easier to connect with non-tech stakeholders. You’ll find more information on the 5 areas at the end of this post.

Step 4, sum up learnings and prioritize actions

The last step is to sum up learnings and prioritize actions. For this, use the same kind of chart you used for the quick scan. Again, make a chart for each of the 5 areas.

When you present the outcomes of the scan, describe the desired outcome from the perspective of each of the 5 key focus areas. Work your way from the desired outcome to the digital tools assessed and the next steps. This will ensure your narrative is both aspirational and practical.

After your assessment, you’ll have insights into what to improve in your digital tools and your organization to accelerate positive impact. That topic goes well beyond the boundaries of technology, so you need to build bridges between tech and non-tech. More on that in the last post in this series, “Reframe technology’s purpose and strengthen organizational alignment”. You’ll find my posts and more at SogetiLabs.


Key focus areas to assess how technology shapes your organizational positive impact

There are 5 key focus areas in which technology shapes your organizational positive impact. Below you will find further insights into each of these 5 key focus areas, tailored to technology.

  • Inclusion
  • Empowerment and collaboration
  • Personal safety
  • Environmental footprint
  • Data and privacy

Inclusion

Essence
Inclusion is the practice of providing equal opportunities.

Outcome of a strong purpose proposition
– Digital tools designed to welcome, and to be accessible to, a wide group of users, including the disabled, illiterate, challenged, and underprivileged.
– Digital tools that do not contain or create biases, such as racial, gender, or ideological bias.

Examples of technology
– Mobile apps and websites
– Dashboards and forecasts
– Tests and assessments
– Application, complaint, and request forms
– Design and UX
– Automation
– Artificial intelligence / big data
What to assess
Assess the accessibility of digital tools.
Assess bias in digital tools. Be especially alert if automation is applied, and more so in the case of artificial intelligence.

If bias is present in machine learning / artificial intelligence, assess the quality of the inputs (big data, applied models), the learning models (machine learning), and the digital tool itself (code and design).

If digital tools are insufficiently inclusive, assess considerations, guidelines, processes and actors for buying, designing, deploying, and improving digital tools from an inclusion perspective.

What can be learnt
– Quality of digital tools from an accessibility perspective.
– Priorities, choices and trade-offs regarding accessibility.
– Prejudices and assumptions in data, models, tools and mindsets.
– Processes put in place to identify and correct exclusion.
– Strength of the organization’s culture regarding inclusion.

Empowerment and collaboration

Essence
It’s essential to empower stakeholders if you want to truly collaborate. Empowerment is giving others the means to have an impact. Collaboration is working together based on the premise of mutual benefits, rather than acceding to the other side’s position or imposing your own position. Collaboration is creating something together. By comparison, with reciprocity you give as much as you receive.

Outcome of a strong purpose proposition
A variety of digital tools that bring value to the collaboration by sharing data and advancing interaction.

Examples of technology
– Digital community and marketplace
– Platforms to digitally share data, information and ideas
– Virtual workspaces shared with stakeholders
– E-learning

What to assess
Satisfaction, effect and success of digital tools. If digital tools insufficiently empower and advance collaboration, assess considerations, guidelines, processes and actors for buying, designing, deploying and improving digital tools.

What can be learnt
Opportunities to tailor digital tools to stakeholder needs. Levers for both your organization’s and your stakeholders’ willingness or ability to learn from and collaborate with each other. For instance, perceived benefits, awareness, motivation, enablement, alignment, cultural fit, legal risks, and sensitivities.

Personal safety

Essence
Personal safety is the absence of physical and emotional harm.

Outcome of a strong purpose proposition
Digital tools that protect the physical and emotional wellbeing of your stakeholders.

Examples of technology
– Design and UX
– Applications that describe safety procedures. For instance, when visiting a production plant.
– Features that enhance safety. For instance, a panic button in the Uber app, contactless delivery during the Covid-19 crisis, health features on a smartwatch, automated moderation on interaction platforms.
– Hardware and architecture (reliability and availability).

What to assess
Assess perceived and actual safety. For this, you can use questionnaires and you can use internal reports about harm, incidents and complaints. If digital tools insufficiently shelter your stakeholders from physical or emotional harm, assess considerations, guidelines, processes and actors for selecting, designing, deploying and improving digital tools.

What can be learnt
– Exposure of your stakeholders to harm.
– Effectiveness of continuous learning.
– Boundaries that inhibit change.

Environmental footprint

Essence
An environmental footprint, in the context of this post, is the effect an organization has on the environment.

Outcome of a strong purpose proposition
Digital tools, including their infrastructure and data storage, that minimize energy usage and use renewable energy sources (electricity as a first layer and the source of that electricity as a second layer); absence of harmful materials in hardware; recycling of hardware.

Examples of technology
– Green coding, data storage and data governance.
– Green procurement and recycling of hardware.

What to assess
Amount and source of energy required for your applications and hardware to function. Availability of alternatives. A year’s worth of email for one person, for instance, is roughly equivalent to driving 200 miles in an average car.

The extent to which code and data are used and useful. Saved drafts and outdated documents do add up. Presence of harmful materials in your hardware and the availability of less harmful alternatives.

Level of and approach to recycling hardware. If digital tools exceed an acceptable level of footprint, assess processes, actors, monitoring and learning models on reducing footprint.
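
To make the energy assessment concrete, here is a rough, illustrative Python calculation; every factor below is an assumption to be replaced with measured values for your own infrastructure and grid:

HOURS_PER_YEAR = 24 * 365
SERVER_POWER_KW = 0.4          # assumed average draw of one server
GRID_KG_CO2_PER_KWH = 0.4      # assumed grid carbon intensity
CAR_KG_CO2_PER_MILE = 0.4      # assumed average car emissions

energy_kwh = SERVER_POWER_KW * HOURS_PER_YEAR
co2_kg = energy_kwh * GRID_KG_CO2_PER_KWH
print(f"One server per year: {energy_kwh:.0f} kWh, about {co2_kg:.0f} kg CO2e,")
print(f"comparable to driving {co2_kg / CAR_KG_CO2_PER_MILE:.0f} miles in an average car.")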

What can be learnt
– The footprint of your digital tools and opportunities for improvement.
– Priorities, choices and trade-offs regarding footprint.
– Effectiveness of continuous learning.
– Boundaries that inhibit change.

Data and privacy

Essence
Privacy is the freedom from unauthorized intrusion and concerns the self-determination of both organizations and individuals in deciding with whom to share their data.

Outcome of a strong purpose proposition
Keeping data safe when collecting, storing, sharing, using and deleting data. Both private and proprietary data and both your organization’s and your stakeholders’ data.
Examples of private information: names, social security numbers, birth dates, addresses, driver’s license numbers, credit card numbers, opinions, relations, beliefs, memberships.
Examples of proprietary information: views, policies, products and services in development, industry insights, organizational projects, roadmaps, reviews, processes.

Examples of technology
– Policies for and monitoring of information protection and privacy
– Cybersecurity and data governance

What to assess
The extent to which policies are current, accessible and known.

The strength of your cybersecurity and compliance with your organization’s policies and regulatory restrictions. Potential areas to focus on are recent data breaches, their cause, and remedies. Or the policies themselves and their underlying identified risks. Alternatively, you could assess your incident response mechanisms. As a final example, you could learn from reports or fines from regulators.

If data is insufficiently safeguarded, assess processes, actors and the monitoring and learning models on data security and privacy.
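
As one small input to such an assessment, here is a minimal Python sketch that flags strings that look like private data in exported text (logs, form dumps); the patterns are illustrative and will produce false positives:

import re

PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn_like": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card_like": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan(text):
    # Return every match per category so a reviewer can assess exposure.
    return {label: pattern.findall(text) for label, pattern in PII_PATTERNS.items()}

sample = "Contact jane.doe@example.com, SSN 123-45-6789."
print(scan(sample))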

What can be learnt
– Level of data protection and privacy.
– Weaknesses and strengths of digital tools and infrastructure regarding data safety and privacy.
– Organizational levers for improving on data safety and privacy.

The post Assess how technology shapes your organizational positive impact appeared first on SogetiLabs.

NLP applied in business

By Joleen van der Zwan

Paul Verhaar is Lead Data Scientist with Sogeti Netherlands. In this webinar I interviewed him about Natural Language Processing, a specific field of Artificial Intelligence.

About Paul Verhaar

Paul has a passion for Natural Language Processing. He loves complex problems that kindle creativity and out-of-the-box thinking and projects with social impact.

He has a background in linguistics and New Media. Paul is always up for a chat on data science and/or the impact of technology on civilization. In his spare time, he is a pro musician, avid motorcycle rider and single-speed bike builder.

About the NLP webinar

This webinar will take you through the field of Natural Language Processing (NLP) and AI. How close are we to an AI that can understand text? Natural Language Processing is on the rise after major breakthroughs since late 2019. Where exactly does NLP stand amidst the data science creations in the world of AI? And how can we leverage these techniques on the ever-growing body of text data? We touched upon what NLP is, the technology NLP is used in, the NLP breakthrough(s), recent trends, the future, and two (Sogeti) applications leveraging state-of-the-art techniques.

The post NLP applied in business appeared first on SogetiLabs.

Technology as an accelerator for positive impact

By Léon de Bakker

A telltale for technological and organizational health

Technology can be an accelerator for positive impact. As a tool in its own right and as a telltale for organizational health.

Reading time: 3 minutes.

Your technology can help you as an accelerator for positive impact. Obviously, in its own right as a tool that can be optimized. And, as significantly, on a deeper level as an indicator of your organization’s preferences, actions and health. For it’s your organization that has bought, developed, implemented and used technology.

By assessing and improving your technology, you will unlock valuable insights on organizational driving forces that reach way beyond the realm of technology.

This post is part of a series about technology and organizational positive impact

  1. A guide to organizational positive impact, tech edition
  2. Doing good drives profitability
  3. Major events and big statements
  4. Be consistent about your positive impact
  5. A strong purpose proposition requires agility and resilience
  6. Agility and organizational complexity, beware of the present
  7. How to create positive impact with technology? Foster critical thinking
  8. Technology as an accelerator for positive impact
  9. Assess how technology shapes your organizational positive impact
  10. Reframe technology’s purpose and strengthen organizational alignment

Technology can make or break your organization

I have written previously about how doing good generates distinct business advantages. And how a negative impact leads to a disadvantage. I also wrote about how technology can be a force of good or bad. Which is a choice, not an inevitability. Consequently, it’s clear to me that technology can make or break your organization.

Especially since technology is omnipresent. From the dashboards and communication tools used in business strategy to expert applications on an operational level. And also, touching all the different business areas such as procurement, HR, legal, customer service, marketing, production and finance. Lastly, technology connects your organization to many of its stakeholders like resellers, suppliers, customers and regulators.

5 key focus areas for positive impact and technology

Given the presence and impact of technology, it’s worth understanding whether your technology landscape is optimized to accelerate your organizational positive impact. I see 5 key focus areas:

  • Inclusion
  • Empowerment and collaboration
  • Personal safety
  • Environmental footprint
  • Data and privacy

Looking through the lens of these 5 key focus areas, you’ll find opportunities for technology to increase positive impact. For instance, to improve the digital community you may have in place to foster and enable stakeholder collaboration. Or to sharpen your software code to minimize calculations thus leading to lower energy consumption. As a final example, to create digital customer journeys that maximize contactless interactions in light of the COVID-19 crisis.

On a deeper level: human interactions and thinking shape your technology landscape

Obviously, it’s possible to change your technology landscape, for technology is a tool. A collection of ones, zeros and hardware. But what good does that do if the underlying and driving human interactions and thinking are left untouched?

Let’s say that there’s an unfortunate bias in your automated recruitment tool. From a technology perspective you want to fix the tool. With a wider lens, you want to understand if the bias is a symptom of prejudices within your organization. This is not necessarily the case. Perhaps there isn’t enough time and resources available to monitor, evaluate and improve vital systems and processes. Or maybe organizational goals are creating a perverse incentive.

Technology as a telltale for organizational improvement

You could take that next step and go beyond technology. Apply critical thinking and use a broader lens. Regard technology’s shortcomings as a likely sign of sub-optimal underlying processes, dynamics, actions, mindsets, or priorities. Change those. If you help your organization to improve on a deeper level, you will ensure not just more aligned and optimized technology, but a more aligned and optimized organization. Better equipped to make a positive impact on the planet and society.

The next post, “Assess how technology shapes your organizational positive impact”, will provide a framework to learn about your technology landscape and its underlying dynamics.

You’ll find my posts and more at SogetiLabs.


Tips

Setting the stage

To successfully perform an assessment, you need to set the right conditions, such as vision, goal, scope and boundaries, and access to resources and people. This is the technical side of an assessment.

You will also need to address the ethical and human side of an assessment, such as creating a safe space. Perceived safety will in part depend on past experiences. So, identify relevant issues with previous questionnaires, investigations, reviews, retrospectives, research and assessments, and use your learnings to improve on safety. The technical side of setting up your assessment is important in this area too. Only accept a well-defined, open process.

Communicate with nuance

An essential part of your assessment will be to connect the dots. There will always be an element of subjectivity in your findings and proposed next steps. Respect that grey area and communicate in a nuanced manner.

Critical thinking

Your assessment will in part dive into organizational dynamics, behavior and mindset. This is the domain of organizational health, learning and development, and organizational change. These topics are widely covered in articles, books, podcasts, case studies and research. So, I’ll focus on the one key element that is most fundamental to me: the space for critical thinking.

Questions you might ask to get a first rough idea: Is your organization used to giving and receiving feedback constructively? Are lessons learned translated into change? Is there space for fruitful discussions? Can people differ in opinion and still be treated openly and with respect? Positive answers to these questions indicate a healthy organization that is capable of learning, improving and changing together.

Deep dive

For organizational health, you might want to read “Organizational health: The ultimate competitive advantage” in the McKinsey Quarterly of June 2011.

For organizational change you could read “A Causal Model of Organizational Performance and Change” written by W. Warner Burke and George H. Litwin in 1992.

The post Technology as an accelerator for positive impact appeared first on SogetiLabs.

Kubernetes Security Basics

By Ankur Jain

Kubernetes containers and tools empower businesses to automate many parts of application deployment, delivering enormous business benefits. However, these new deployments are just as vulnerable to attacks and exploits from hackers and insiders as traditional environments, making Kubernetes security a critical component of all deployments.

Ransomware, crypto-mining, data-stealing, and service-disruption attacks will continue to be launched against new container-based virtualized environments in both private and public clouds. To make our application deployments secure, we need to follow these steps.

Kubernetes Security at Run Time

When containers are running in production, the three critical security vectors for protecting them are network filtering, container inspection, and host security.

Inspect and Secure the Network

A container firewall is a new type of network security product which applies traditional network security techniques to the new cloud-native Kubernetes environment. There are various approaches to securing a container network with a firewall, including (a minimal sketch follows this list):

  • Layer 3/4 filtering, based on IP addresses and ports. This approach incorporates Kubernetes network policy to update rules dynamically, securing deployments as they change and scale.
  • Web application firewall (WAF) attack detection can protect web-facing containers (typically HTTP-based applications) using techniques that detect common attacks, similar to the functionality of web application firewalls.
  • Layer-7 container firewall. A firewall with Layer 7 filtering and deep packet inspection of traffic secures containers using network application protocols. Protection is based on application protocol whitelists as well as built-in detection of common network-based application attacks, for example DDoS, DNS, and SQL injection.
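
As a minimal sketch of the Layer 3/4 approach, a Kubernetes NetworkPolicy can be created through the official Python client; the namespace, labels, and port below are illustrative assumptions:

from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod

# Only pods labelled app=frontend may reach app=backend pods on port 8080.
policy = client.V1NetworkPolicy(
    metadata=client.V1ObjectMeta(name="allow-frontend-only"),
    spec=client.V1NetworkPolicySpec(
        pod_selector=client.V1LabelSelector(match_labels={"app": "backend"}),
        policy_types=["Ingress"],
        ingress=[client.V1NetworkPolicyIngressRule(
            _from=[client.V1NetworkPolicyPeer(
                pod_selector=client.V1LabelSelector(match_labels={"app": "frontend"})
            )],
            ports=[client.V1NetworkPolicyPort(port=8080)],
        )],
    ),
)

client.NetworkingV1Api().create_namespaced_network_policy(
    namespace="default", body=policy
)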

Inspection of Containers

Attacks use privilege escalations and malicious processes to carry out an attack or spread it. Exploits of vulnerabilities in Linux (for example, Dirty Cow), packages, libraries or the applications themselves can result in suspicious activity inside a container.

Examining container processes and file system activity and detecting suspicious behavior is a critical element of container security. Suspicious processes, for example port scanning and reverse shells, or privilege escalations, should all be detected. There should be a combination of built-in detection as well as a baseline behavior learning process which can identify unusual processes based on past activity.

Host Security

If the host on which containers run is compromised, all kinds of bad things can happen. These include:

  • Privilege escalations to root
  • Stealing of secrets used for secure applications or to access infrastructure
  • Changing of cluster admin privileges
  • Host resource damage or hijacking (for example, crypto-mining software)
  • Stopping of critical orchestration infrastructure, for example the API Server or the Docker daemon

Just like containers, the host system should be monitored for these suspicious activities. Together, the combination of network inspection, container inspection, and host security offers the best approach to detecting a kill chain across different vectors.

Open Source Kubernetes Security Tools

Here are some open-source security tools that help make your deployments secure and attack-free.

  • Kubernetes Network Policy
  • Istio
  • Grafeas
  • Clair
  • Kubernetes CIS Benchmark

The post Kubernetes Security Basics appeared first on SogetiLabs.

Whitepaper: Start off the Power Platform journey

By Rohan Wadiwala

How to initiate your organization's journey in implementing the Power Platform

Consider a scenario: to improve business processes, communication and overall organizational integrity, a company has just bought Office 365 licenses for its employees. The IT team understands that these come with Power Platform licenses, and it wants to encourage various groups to take advantage of them and start creating low-code/no-code apps by themselves. A few months down the line, IT starts getting support tickets because various apps created by users, which are now being used by multiple departments, are not functioning as intended. IT is also getting multiple requests to upgrade the Power Platform plan so that users can access various premium features of the platform. IT is overwhelmed and, due to restricted budget constraints, is not able to satisfy all the end users, thus creating malcontent.

This is a typical scenario for a company that jumps head-first into a technology platform that has great pull for end users, but without thorough due diligence and a long-term plan. The Power Platform (PP) provides many features which, when leveraged correctly, can be an absolute boon for the company. But some ground rules need to be put in place for this endeavour to be successful.

This whitepaper is intended to provide a starting point for implementing a successful Power Platform (PP) practice in an organization. This paper concentrates only on a few correct steps you can take in the journey; the rest of the journey can be planned according to the success of these first few steps.

Download here.

The post Whitepaper: Start off the Power Platform journey appeared first on SogetiLabs.

How to create positive impact with technology? Foster critical thinking

By Léon de Bakker

Curating an organization that doesn’t take anything for granted

Do you want to create positive impact with technology? It's really up to you to choose a positive or negative impact.

Reading time: 3 minutes.

Technology is, just like steel or words, neither good nor bad. It is a tool without emotion or consciousness. Yet its application can be a powerful force for good. But technology can also destroy value in unforeseeable ways. A conscious approach and critical thinking will help you create positive impact with technology.

This post is part of a series about technology and organizational positive impact

  1. A guide to organizational positive impact, tech edition
  2. Doing good drives profitability
  3. Major events and big statements
  4. Be consistent about your positive impact
  5. A strong purpose proposition requires agility and resilience
  6. Agility and organizational complexity, beware of the present
  7. How to create positive impact with technology? Foster critical thinking
  8. Technology as an accelerator for positive impact
  9. Assess how technology shapes your organizational positive impact
  10. Reframe technology’s purpose and strengthen organizational alignment

How technology helps preserve the rain forests in Indonesia

Technology can help achieve a sustainable way to produce palm oil. Palm oil comes predominantly from Indonesia. Indonesia is also home to rainforests that are the most biodiverse places on Earth. Economically, palm oil is much more lucrative than biodiversity. The effect, I’m cutting some corners here, is illegal and widespread deforestation.

A Dutch university and a Dutch company joined forces. Now they use data from a radar-equipped satellite that monitors the rainforest in Indonesia. The software interprets the data that comes from space and shows, in great detail, where and which type of vegetation is lost and what has come in return. Ten multinationals such as Unilever, PepsiCo, and Nestlé use this information to have an informed conversation with their palm oil suppliers. Which in turn brings a more sustainable approach closer.

Videoconferencing gone bad

Technology is an indispensable tool for organizations to positively impact the planet and society. But be aware of its unintentional side-effects.

At first glance, Zoom, a video conferencing application, provides a wonderful service. It connects people across the globe. The use of Zoom has exploded due to the isolation that comes with the measures to contain the spread of the coronavirus. The peak number of daily users went from 10 million at the end of December to 200 million in March. With that increase in users comes an increase in scrutiny. As it turns out, Zoom sent user data to Facebook, wrongly claimed end-to-end encryption, allowed meeting hosts to track attendees, and left Mac users vulnerable to having their microphones and webcams hijacked; users also experienced uninvited and unwanted attendees, often there to shout abuse, share pornography or make racist remarks.

Small incidents and mundane acts

These two examples of sustainable palm oil and Zoom are straightforward. Most people will choose health over destruction. Safety over abuse. If similar cases arise within your organization, I wish for a smooth and simple process leading to a quick resolution.

Your big incidents and your response show your stakeholders the strength of your purpose proposition. They enable you to lead by example and make it possible for others to follow.

Your organization’s small incidents and daily, almost mundane acts are equally important. They are hardly noticeable and yet they do add up. Jointly, those small acts accumulate into interwoven patterns of ways of working. They will create automatic and unquestioned behavior. It will become the way things are done.

A wide lens and critical thinking

Some level of automatic behavior is essential and welcome as long as there’s space to correct course, also in small ways. If you want your organization to be able to deal with shifting contexts and stay aligned to your purpose proposition, then you need to stay conscious in your actions. Apply a wide lens.

To achieve this, foster critical thinking. Encourage an environment that is open to question the ordinary and the exceptional alike. One that embraces constructive and solid feedback and cuts through bias, ignorance, stereotypes, automatic behavior and groupthink.

In my next post, “Technology as an accelerator for positive impact”, more about applying critical thinking and a wide lens to your technology landscape. You’ll find my posts and more at SogetiLabs.


Tips

Defining critical thinking

Critical thinking is an umbrella for underlying skills and approaches. Since there is no commonly agreed upon definition nor framework, you need to define critical thinking in the context of your own organization.

To me, critical thinking is the ability to keep an open and curious mind and to voice oneself in a logical, empathic and independent manner. Critical thinking is especially important when the going gets tough. And yet it’s at those moments of difficulty when barriers to critical thinking are most evident. For instance, due to complexity, ambiguity, sensitivity, time constraints, incomplete or incorrect information, group pressure or possible rewards or punishments.

Curate a critical thinking-friendly environment

The good news is, you can help your organization overcome barriers. The disclaimer to the good news: critical thinking is essential for a resilient positive impact. So, if you want to use technology for good, define and operationalize critical thinking. For this, curate a critical thinking-friendly environment on an individual, value chain and organizational level to achieve the desired skill set, mindset, group dynamics and policies. Help shape conversations by being clear about the why, objectives, limitations and boundaries.

Critical thinking will be in high demand

You won’t be the only one aiming to improve on critical thinking. According to a report by the World Economic Forum, critical thinking is amongst the skills employers consider most important today and expect to be trending by 2022. See table 3 of the report “Towards a Reskilling Revolution”. So, you may need to step up your game, if you want to stand out from the crowd and attract and keep talent that has mastered this particular skill.

The post How to create positive impact with technology? Foster critical thinking appeared first on SogetiLabs.

What is AWS Lambda?

By Ankur Jain


What is AWS Lambda?

AWS Lambda is a serverless compute service, so you don’t have to worry about which AWS resources to launch or how to manage them. Instead, you put your code on Lambda, and it runs. Among other things, Lambda can be used to execute background tasks.

In AWS Lambda, code is executed in response to events in AWS services, for example adding/deleting files in an S3 bucket, an HTTP request from Amazon API Gateway, and so on.
AWS Lambda also helps you concentrate on your core product and business logic instead of managing operating system (OS) access control, OS patching, right-sizing, provisioning, scaling, and so on.

What is AWS Lambda Function?

The AWS::Lambda::Function resource creates a Lambda function. To create one, you need a deployment package and an execution role. The deployment package contains your function’s code. The execution role grants the function permission to use AWS services, for example Amazon CloudWatch Logs for log streaming and AWS X-Ray for request tracing.

Anatomy of a Lambda Function

To understand how to write a Lambda function, you need to understand what goes into one.

Handler:

A Lambda function has a few requirements. The first requirement you need to satisfy is to provide a handler. The handler is the entry point for the Lambda. A Lambda function accepts JSON-formatted input and will, for the most part, return the same.
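
A minimal Python handler could look like this; the event shape is an illustrative assumption:

import json

def lambda_handler(event, context):
    # 'event' carries the JSON payload; 'context' carries runtime metadata.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }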

Runtime Environment:

The second requirement is that you’ll have to specify the runtime environment for the Lambda. The runtime will typically correspond directly with the language you chose to write your function in.

Trigger:

The last requirement is a trigger. You can configure a Lambda invocation in response to an event, for example a new file uploaded to S3, a change in a DynamoDB table, or a similar AWS event. You can also configure the Lambda to respond to requests to AWS API Gateway, or to run on a timer triggered by AWS CloudWatch. You can even set up Lambda functions to respond to events generated by Alexa, but that is well beyond the scope of this article.

Syntax of Lambda Function:
The following syntax is used to declare it:
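
The declaration itself is not reproduced here. As a sketch, an AWS::Lambda::Function resource in a CloudFormation template has roughly the following shape, written as a Python dict mirroring the JSON; the role ARN, bucket, and key are placeholders:

lambda_function_resource = {
    "MyFunction": {
        "Type": "AWS::Lambda::Function",
        "Properties": {
            "FunctionName": "my-function",
            "Runtime": "python3.8",
            "Handler": "index.lambda_handler",  # module.function entry point
            "Role": "arn:aws:iam::123456789012:role/my-execution-role",
            "Code": {
                "S3Bucket": "my-deployment-bucket",  # deployment package location
                "S3Key": "my-function.zip",
            },
            "Timeout": 30,      # seconds
            "MemorySize": 128,  # MB
        },
    }
}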

Best practices of Lambda functions
Here are some important best practices for Lambda functions:

  • Set the right timeout.
  • Use the local storage in the temp folder, which is 512 MB in size.
  • Limit the use of start-up code that isn’t directly related to processing the current event.
  • Use the built-in CloudWatch monitoring of your Lambda functions to view and improve request latencies.

Advantages of using AWS Lambda

AWS Lambda has a few notable advantages over maintaining your own servers in the cloud. The main ones are:

  • Pay per use. In AWS Lambda, you pay only for the compute your functions use, plus any network traffic generated. For workloads that scale significantly according to the time of day, this type of billing is generally more cost-effective.
  • Fully managed infrastructure. Since your functions run on the managed AWS infrastructure, you don’t have to think about the underlying servers; AWS takes care of this for you. This can result in significant savings on operational tasks, for example patching the operating system or managing the network layer.
  • Automatic scaling. AWS Lambda creates instances of your function as they are requested. There is no pre-scaled pool, no scale levels to worry about, no settings to tune, and at the same time your functions are available whenever the load increases or decreases. You only pay for each function’s run time.

The post What is AWS Lambda? appeared first on SogetiLabs.

Agility and organizational complexity, beware of the present

By Léon de Bakker

Going from A to B is as much about A as it is about B.

Beware of the present if you want to embrace agility. So, don't over-focus on your goals but give space to stop and address the present.

Reading time: < 3 minutes.

The compelling promise of an agile organization understandably shifts focus to a future state of nimbleness. But it’s the present organization that will take you there. So, beware of the present if you want to embrace agility.

Especially since organizational changes have a rather low success rate. In a McKinsey survey, just 26 percent of respondents say the transformations they’re most familiar with have been very or completely successful.

This post is part of a series about technology and organizational positive impact

  1. A guide to organizational positive impact, tech edition
  2. Doing good drives profitability
  3. Major events and big statements
  4. Be consistent about your positive impact
  5. A strong purpose proposition requires agility and resilience
  6. Agility and organizational complexity, beware of the present
  7. How to create positive impact with technology? Foster critical thinking
  8. Technology as an accelerator for positive impact
  9. Assess how technology shapes your organizational positive impact
  10. Reframe technology’s purpose and strengthen organizational alignment

The organization as is

You wish for your organization to embrace a new, more resilient future. To change its course and its way of getting things done. Then you will have to achieve that with the organization as is. With all of its “old” and in part undesired behavior, products and processes. Most of the transformational issues I see result from too strong a focus on the future. Disregarding the status quo. Not sufficiently taking into account strongly felt and deeply engrained dynamics. I’ll share with you three examples.

Oops

Senior management of a technology organization decided a few months back that they had to go agile. After deciding on goals, a roadmap and a new organizational model, they secured budget, launched a communication campaign and put together a support team to help with the transition. Quite thorough. All set to go? They forgot to involve their lighthouse teams, suppliers and clients. Even though these stakeholders would be strongly impacted by this change and instrumental in a successful implementation. In large part this omission was the result of the profile of this organization and their management. Great at setting direction and moving into action. Not so great at stakeholder collaboration.

A bank decided to extend the Agile Way of Work beyond the technology department. As it turned out, the other departments were not happy with the transparency that comes with agility. Their support and therefore the transition stopped.

An insurance company introduced an agile model to become nimbler and increase value for investors. Core to agility is empowering teams and delegating responsibilities, which is something neither management nor teams were used to doing. The result was that decision making largely came to a standstill, which created a disconnect between the operational level and management and seriously hampered execution.

Going agile will magnify your organization’s challenges

Changing your organizational model is exciting and opens up new possibilities. The change will also reveal fears, weaknesses, dynamics, and risks that have been with your organization for some time. Going agile won’t fix that. Quite the contrary. Going agile will magnify your organization’s challenges.

My advice: use your agile transformation to proactively address organizational challenges. This is vital if you want to succeed in your positive impact journey. Technology can help you on that journey. Or hinder. More about that in my next post: “How to create positive impact with technology? Foster critical thinking”.

You’ll find my posts and more at SogetiLabs.


Tips

Embrace the mindset of perpetual beta

Be conscious, clear and solid about direction, the why. The how is open to change. Maybe metrics don’t work. Maybe technology changed. Or your ambition has to be restated. Maybe you have to go quicker or maybe you were too idealistic. We’re not here to keep inertia and there’s no space for legacy. Be honest with yourself and pivot when needed. Your journey is continuously in flux. Embrace the mindset of being in perpetual beta. Strive continuously toward what fits the journey best. There’s no manual to follow. You’ll have to find your own path.

Be inclusive in your execution

Strive for a shared sense of purpose and values and create space for individuality. Establish solid dynamics that are inclusive and supportive. Ensure energy is invested both top down and bottom up. Revisit your structures, programs and metrics. Ensure changes will ripple all throughout your organization and its processes. Set up regular touchpoints to not lose sight of the bigger picture and to tie all individual actions together.

Beware of your agile model

There are different approaches to organizational agility. Some large-scale models demand a strong hierarchy and lots of management, staff and planning. That might not sound very agile but for some bureaucracies it’s quite a leap and for now a perfect fit. Other approaches take out layers of management. Or focus on your value chain. Find the approach that suits your organization this moment and be ready to switch when needed.

Respect natural boundaries

Tailor your agile approach throughout your organization. Different groups and tasks require a different approach. For instance, the difference in speed, focus and mindset of selling products versus filing invoices. These different tasks require different human profiles resulting in natural boundaries within your organization. These boundaries are likely areas of friction. I notice that frameworks, such as SAFe, try to overcome this friction by extending the agile way of work to a larger part of the organization. I also notice that forcing both sides of a natural boundary to embrace the same agile way of work, is futile.

Take down man-made boundaries

Next to natural boundaries, your organization has created some on its own. Those man-made boundaries are the result of, primarily but not exclusively, your organizational structure, governance structure, metrics, leadership development program, innovation model, HR guidelines and remuneration program. This in turn has created a mindset, expected behavior and processes that are at the heart of your organization and most likely do not favor change. So, to successfully become more agile, identify levers to influence those man-made boundaries and liberate your organization from inertia.

The post Agility and organizational complexity, beware of the present appeared first on SogetiLabs.

Is low code replacing traditional development?

By Peter Rombouts

Spoiler alert: no.

Low code vs Traditional Dev

My colleague, friend and SogetiLabs Fellow Daniel Laskewitz and I frequently talk about this topic. His field of expertise as Microsoft MVP covers the Microsoft Power Platform, including low-code systems like Power Automate (formerly Flow).

All too often people see a division between low code and traditional development using languages like C#, Java, TypeScript and Go.
In the real world, however, these systems work together perfectly.

Most of the time, you cannot solve a problem with only low code. Think about scenarios where you need to link to old legacy systems or make complex API calls. In those cases, low code without any enhancement cannot natively connect to those systems.

Behold custom connectors

In the Microsoft ecosystems, custom connectors allow you to bridge this gap. This way, the low code system can interact with any system you write a connector for. This may be common knowledge, but the fact is that most developers do not see how big this really is.

This means you can link any PowerApp, Microsoft Flow, or LogicApps to your custom connector, and reuse those within your entire organisation.
You could even publicly publish these if you have a service you want to expose. So if you are an ISV, this can help you get more traction on your product.
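
Under the hood, a custom connector is described by an OpenAPI (Swagger 2.0) definition. Here is a minimal sketch for a hypothetical legacy API, written as a Python dict that serializes to the JSON you would import:

import json

connector_definition = {
    "swagger": "2.0",
    "info": {"title": "Legacy Orders API", "version": "1.0"},
    "host": "legacy.example.com",  # placeholder
    "basePath": "/api",
    "schemes": ["https"],
    "paths": {
        "/orders/{id}": {
            "get": {
                "operationId": "GetOrder",
                "parameters": [
                    {"name": "id", "in": "path", "required": True, "type": "string"}
                ],
                "responses": {"200": {"description": "The order"}},
            }
        }
    },
}

print(json.dumps(connector_definition, indent=2))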

Bridging the gap

In the end it all comes down to developers of any system and language understanding the capabilities of the platforms they and their companies are using. For low code developers this means sometimes calling in the help of traditional developers. And more importantly, this also means traditional developers should learn that these low code systems can help you simplify (and thus speed up!) your development by using ready-to-roll systems and connectors available to you.

With over 325 connectors available, that should really speed things up!

Get started!

Want to explore custom connectors? Take a look at these resources, or feel free to contact me or Daniel. We strongly believe bridging this gap between low code and traditional dev is key for success in the future of development!

The post Is low code replacing traditional development? appeared first on SogetiLabs.

Process Definition Document is a Communication Tool

By Tuukka Virtanen

Software automation takes time. It takes time to investigate the to-be-automated system, learn how it functions and how it can be programmatically operated. Maybe you have to install some new software to bridge communication between systems, maybe you have to write that yourself. Getting well-versed in the underlying systems and their data pipelines and their toolchain management can be extremely time-consuming, even if you don’t have to know everything about their inner workings.

But after that, automation is a breeze. When you have your library of automation keywords done, automation should be as simple as giving instructions. Of course, you will probably have to write some more keywords as the automation progresses and you encounter some edge cases. And giving instructions might not be as simple as it looks. So, when your library of automation keywords covers all the basic operations of the system, you can start eyeing the real goal: how to describe the process as a series of automation keyword steps.

Translating the process into a series of automation steps might sound like a simple task, but in practice, it can be complicated. Failed automation attempts happen when the developer misunderstands the process requirements. Maybe the automation process failed because the automation didn’t know what to do when an external service stopped responding to its queries? Because the developer didn’t think about that and so didn’t program the automation steps required.

But what was the real reason for the failure? There never were any process requirements presented to the developer. Because there was no process definition document (PDD).

The process definition document describes the needed automation steps in detail. It is a communication tool for sharing the same vision of the automation process. The level of detail should be such that there is only one way the automation step can be understood. A bad example would be: “Moderator deletes a post”. A good example could be: “Moderator navigates to post detail page. The moderator clicks the ‘Delete post’ button. A popup opens asking ‘Are you sure you want to delete post?’ with buttons ‘Yes’ and ‘No’. The moderator clicks ‘Yes’. The post is deleted and is not visible in the index.” The less ambiguity there is, the better.
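
To illustrate, here is a minimal Python sketch, with stubbed UI calls, of how those unambiguous steps could map onto automation keywords:

def navigate_to_post_detail_page(post_id):
    print(f"Navigating to detail page of post {post_id}")

def click_button(label):
    print(f"Clicking button '{label}'")

def confirm_popup(text, choice):
    print(f"Popup '{text}' -> choosing '{choice}'")

def assert_post_not_in_index(post_id):
    print(f"Verifying post {post_id} is gone from the index")

def moderator_deletes_post(post_id):
    # One keyword per PDD step, so the script reads like the document.
    navigate_to_post_detail_page(post_id)
    click_button("Delete post")
    confirm_popup("Are you sure you want to delete post?", "Yes")
    assert_post_not_in_index(post_id)

moderator_deletes_post(42)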

Different program paths should also be documented. What happens when a user clicks ‘Next’ but the user detail form is only half filled? Should there be a popup saying, ‘Please fill in this information’? Visualize the program paths with a flow chart. Try to think about the happy path and then diverge from it. What can go wrong? You can divide possible exceptions into two categories: logic exceptions and system exceptions. Logic exceptions are errors in program logic, for example off-by-one errors. System exceptions happen when the system has failed or is out of reach, for example in the case of a network connection failure. The automation must be prepared for system exceptions and have pathways around them, in order to continue the automation process. Logic exceptions must be fixed; system exceptions must be prepared for.
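
A minimal Python sketch of the two categories, with a hypothetical flaky external service, could look like this:

import time

class SystemException(Exception):
    """The environment failed: network down, external service unreachable."""

def call_external_service(attempt):
    # Hypothetical service that only responds on the third try.
    if attempt < 2:
        raise SystemException("service not responding")
    return "ok"

def run_step_with_retries(step, retries=3, delay=1.0):
    # System exceptions get a retry pathway so the process can continue;
    # logic exceptions (plain bugs) are not caught here and must be fixed.
    for attempt in range(retries):
        try:
            return step(attempt)
        except SystemException as exc:
            print(f"System exception ({exc}), retrying...")
            time.sleep(delay)
    raise RuntimeError("step failed after retries; route around or escalate")

print(run_step_with_retries(call_external_service))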

Having a process definition document makes work delegation easier among the team members. The business analyst could be responsible for translating business requirements into the process definition document that is then handed over to the automation developer. The automation developer gets to concentrate on the technical perspective and the business analyst gets to concentrate on the business perspective.

The post Process Definition Document is a Communication Tool appeared first on SogetiLabs.
