Tuesday, August 15, 2017

Notes on SRP from the Agile Principles, Patterns, and Practices book

I think that if you rely only on talks, community events, tweets and posts to learn about a concept, you can sometimes end up with diluted (or even completely wrong) versions of the concept due to broken telephone game effects. For this reason, I think it's important to try instead to get closer to the sources of the concepts you want to learn.

Lately I've been studying object-oriented concepts, making an effort to get closer to the sources. These are the resulting notes on the Single Responsibility Principle that I took from the chapter devoted to it in Robert C. Martin's wonderful Agile Principles, Patterns, and Practices in C# book:

  • "This principle was described in the work of [Larry Constantine, Ed Yourdon,] Tom DeMarco and Meilir Page-Jones. They called it cohesion, which they defined as the functional relatedness of the elements of a module" <- [!!!]
  • "... we modify that meaning a bit and relate cohesion to the forces that cause a module, or a class, to change"
  • [SRP definition] -> "A class should have only one reason to change"
  • "Why was important to separate [...] responsibilities [...]? The reason is that each responsibility is an axis of change" <- [related with Mateu Adsuara's complexity dimensions]
  • "If a class has more than one responsibility the responsibilities become coupled" <- [related with Long Method, Large Class, etc.] <- [It also eliminates the possibility of using composition at every level (functions, classes, modules, etc.)] "Changes to one responsibility may impair or inhibit the class ability to meet the others. This kind of coupling leads to fragile designs" <- [For R. C. Martin, fragility is a design smell, a design is fragile when it's easy to break]
  • [Defining what responsibility means]
    • "In the context of the SRP, we define a responsibility to be a reason for change"
    • "If you can think of more than one motive for changing a class, that class has more than one responsibility. This is sometimes difficult to see"
  • "Should [...] responsibilities be separated? That depends on how the application is changing. If the application is not changing in ways that cause the [...] responsibilities to change at different times, there is no need to separate them." <- [applying Beck's Rate of Change principle from Implementation Patterns] "Indeed separating them would smell of needless complexity" <- [Needless Complexity is a design smell for R. C. Martin. It's equivalent to Speculative Generality from Refactoring book]
  • "An axis of change is an axis of change only if the changes occur" <- [relate with Speculative Generality and Yagni] "It's not wise to apply SRP, or any other principle if there's no symptom" <- [I think this applies at class and module level, but it's still worth it to always try to apply SRP at method level, as a responsibility identification and learning process]
  • "There are often reasons, having to do with the details of hardware and the OS [example with a Modem implementing two interfaces DateChannel and Connection], that force us to couple things that we'd rather not couple. However by separating their interfaces, we [...] decouple[..] the concepts as far as the rest of the application is concerned" <- [Great example of using ISP and DIP to hide complexity to the clients] "We may view [Modem] as a kludge, however, note that all dependencies flow away from it." <- [thanks to DIP] "Nobody needs to depend on this class [Modem]. Nobody except main needs to know it exists" <- [main is the entry point where the application is configured using dependency injection] "Thus we've put the ugly bit behind a fence. It's ugliness need not leak out and pollute the rest of the app"
  • "SRP is one of the simplest of the principles but one of the most difficult to get right"
  • "Conjoining responsibilities is something that we do naturally"
  • "Finding and separating those responsibilities is much of what software design is really about. Indeed the rest of the principles we discuss come back to this issue in one way or another"
Agile Principles, Patterns, and Practices in C# is a great book that I recommend reading. For me, getting closer to the sources of the SOLID principles has been a great experience that has helped me remove illusions of knowledge I had developed due to the telephone-game effect of having learned them through blogs and talks.

Monday, July 31, 2017

Two examples of Connascence of Position

This post appeared originally on Codesai’s Blog.

As we saw in our previous post about connascence, Connascence of Position (CoP) happens when multiple components must be adjacent or appear in a particular order. CoP is the strongest form of static connascence, as shown in the following figure.
Connascence forms sorted by descending strength (from Kevin Rutherford's XP Surgery).
A typical example of CoP appears when we use positional parameters in a method signature, because any change in the order of the parameters will force all the clients using the method to change.
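
The post's embedded snippets aren't reproduced in this page, so here is a small stand-in sketch in Python (the create_user function and its parameters are made up for illustration):

    # Hypothetical example: the call site depends on the order of the parameters.
    def create_user(name, email, age, country):
        print(f"creating {name} <{email}>, {age}, {country}")

    create_user("Alice", "alice@example.com", 34, "ES")
    # If the signature changes to create_user(email, name, age, country),
    # every call like the one above has to change as well: that is CoP.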

The degree of the CoP increases with the number of parameters, being zero when we have only one parameter. This is closely related to the Long Parameter List smell.
In some languages, such as Ruby, Clojure, C#, Python, etc, this can be refactored by introducing named parameters (see Introduce Named Parameter refactoring)[1].
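
Continuing the made-up create_user sketch, the refactoring could look like this in Python, using keyword-only parameters:

    # The same hypothetical function, now with keyword-only (named) parameters.
    def create_user(*, name, email, age, country):
        print(f"creating {name} <{email}>, {age}, {country}")

    # The calls now name each argument explicitly.
    create_user(name="Alice", email="alice@example.com", age=34, country="ES")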

Now changing the order of the parameters in the signature of the method won’t force the calls to the method to change, but changing the name of the parameters will. This means that the resulting method no longer presents CoP. Instead, it now presents Connascence of Name (CoN), which is the weakest form of static connascence, so this refactoring has reduced the overall connascence.

The benefits don’t end there. If we have a look at the calls before and after the refactoring, we can see how the call after introducing named parameters communicates the intent of each parameter much better. Does this mean that we should use named parameters everywhere?

Well, it depends; there are some trade-offs to consider. Positional parameters produce shorter calls. Using named parameters gives us better code clarity and maintainability than positional parameters, but we lose terseness[2]. On the other hand, when the number of parameters is small, a well-chosen method name can make the intent of the positional arguments easy to guess, making named parameters redundant.

We should also consider the impact that the degree and locality of each instance of CoP[3] can have on the maintainability and communication of intent of each option. On one hand, the impact on maintainability of using positional parameters is higher for public methods than for private methods (even higher for published public methods)[4]. On the other hand, a similar reasoning might be made about the intent of positional parameters: the positional parameters of a private method in a cohesive class might be much easier to understand than the parameters of a public method of a class a client is using, because in the former case we have much more context to help us understand.

The communication of intent of positional parameters can be improved a lot with the parameter name hinting feature provided by IDEs like IntelliJ. In any case, even though they look like named parameters, they are still positional parameters and still present CoP. In this sense, parameter name hinting might end up having a bad effect on your code by reducing the pain of long parameter lists.

Finally, moving to named parameters can increase the difficulty of applying the most frequent refactoring: renaming. Most IDEs are great at renaming positional parameters, but not all of them are as good at renaming named parameters.

A second example.

There are also cases in which blindly using named parameters can make things worse. See the following example:
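
(The original snippet isn't shown in this page; the following is a hedged Python sketch of what such a method could look like.)

    # Hypothetical sketch: two positional parameters that only make sense together.
    def activate_alarm(lower_threshold, higher_threshold):
        print(f"alarm active outside [{lower_threshold}, {higher_threshold}]")

    activate_alarm(5, 20)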

The activate_alarm method presents CoP, so let’s introduce named parameters as in the previous example:
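
(Again a Python sketch standing in for the post's snippet.)

    # The same hypothetical method after introducing named parameters.
    def activate_alarm(*, lower_threshold, higher_threshold):
        print(f"alarm active outside [{lower_threshold}, {higher_threshold}]")

    activate_alarm(lower_threshold=5, higher_threshold=20)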

We have eliminated the CoP and now there’s only CoN, right?

In this particular case, the answer would be no. We’re just masking the real problem, which was a case of Connascence of Meaning (CoM), a.k.a. Connascence of Convention. CoM happens when multiple components must agree on the meaning of specific values[5]. CoM is telling us that there might be a missing concept or abstraction in our domain. The fact that lower_threshold and higher_threshold only make sense when they go together (we’re facing a data clump) is an implicit meaning or convention on which the different methods sharing those parameters must agree; therefore there’s CoM.

We can eliminate the CoM by introducing a new class, Range, to wrap the data clump and reify the missing concept in our domain, reducing the CoM to Connascence of Type (CoT)[6]. This refactoring, plus the introduction of named parameters, leaves us with the following code:
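
(Sketched again in Python; the Range name comes from the post, the rest of the details are assumptions.)

    # The data clump is reified as a Range, and activate_alarm receives the new concept.
    class Range:
        def __init__(self, lower, higher):
            if lower > higher:
                raise ValueError("lower must be <= higher")
            self.lower = lower
            self.higher = higher

    def activate_alarm(*, threshold_range):
        print(f"alarm active outside [{threshold_range.lower}, {threshold_range.higher}]")

    activate_alarm(threshold_range=Range(5, 20))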

This refactoring is way better than only introducing named parameters because it not only provides a bigger coupling reduction, going down the scale from CoP to CoT instead of only from CoP to CoM, but it also introduces more semantics by adding a missing concept (the Range object).

Later we’ll probably detect similarities[7] in the way some of the functions that receive the new concept use it, and reduce them by moving that behavior into the new concept, turning it into a value object. It’s in this sense that we say that value objects attract behavior.

Summary.

We have presented two examples of CoP: a “pure” one, and another that was really hiding a case of CoM. We have related CoP and CoM to known code smells (Long Parameter List, Data Clump and Primitive Obsession) and introduced refactorings that reduce their coupling and improve their communication of intent. We have also discussed a bit about when and what we need to consider before applying these refactorings.

Footnotes.

[1] For languages that don't allow named parameters, see the Introduce Parameter Object refactoring.
[3] See our previous post About Connascence.
[4] For instance, Sandi Metz recommends in her POODR book to "use hashes for initialization arguments" in constructors (this was the way of having named parameters before Ruby 2.0 introduced keyword arguments).
[5] Data Clump and Primitive Obsession smells are examples of CoM.
[6] Connascence of Type, (CoT), happens when multiple components must agree on the type of an entity.
[7] Those similarities in the use of the new concept are examples of Connascence of Algorithm, which happens when multiple components must agree on a particular algorithm.

Sunday, July 2, 2017

Kata: LegacySecurityManager in Java

This week I did the LegacySecurityManager kata in Java.

This is the original version of the code (ported from C# to Java):
As you can see, createUser is a really hard-to-test static function with too many responsibilities.

This is the final version of createUser after a "bit" of refactoring:
which is using the CreatingUser class:
The before and after code is not as interesting as how we got there.

I kept the legacy code interface and tried to use as much as possible only automatic refactorings (mainly Replace Method with Method Object and Extract Method) to apply the extract and override dependency-breaking technique from Michael Feathers' Working Effectively with Legacy Code book, which enabled me to write tests for the code.
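
The kata's snippets aren't embedded in this page. As a rough illustration only, here is a hypothetical Python sketch of the extract and override idea (it is not the kata's Java code): the hard-to-test dependency is extracted into methods that a subclass used only by the tests overrides.

    # Hypothetical sketch of extract and override (not the kata's actual code):
    # console I/O is extracted into overridable methods ("seams").
    class UserCreator:
        def create_user(self):
            username = self.ask("Enter a username")
            password = self.ask("Enter a password")
            self.notify(f"Saving details for {username}, {self.encrypt(password)}")

        def ask(self, prompt):          # seam: reads from the console in production
            return input(prompt + ": ")

        def notify(self, message):      # seam: writes to the console in production
            print(message)

        def encrypt(self, password):
            return password[::-1]

    # The test subclass overrides the seams, making create_user testable
    # without touching a real console.
    class TestableUserCreator(UserCreator):
        def __init__(self, answers):
            self.answers = list(answers)
            self.messages = []

        def ask(self, prompt):
            return self.answers.pop(0)

        def notify(self, message):
            self.messages.append(message)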

Then, with the tests in place, it was a matter of identifying and separating responsibilities and introducing some value objects. This separation allowed us to remove the scaffolding produced by the extract and override technique, producing much simpler and easier-to-understand tests.

You can follow the process by looking at the commits (I committed after every refactoring step). There you'll be able to see not only the process but also my hesitations, mistakes and changes of mind as I learned more about the code.

You can also find all the code on GitHub.

After reflecting on what I did, I realized that I could have done less to get the same results by avoiding some tests that I later found out were redundant and deleted. I also need to improve my knowledge of IntelliJ's automatic refactorings to improve my execution (that part you can't see in the commits).

All in all, it's a great kata for practicing your refactoring skills.

Interesting Talk: "Functional architecture. The pits of success"

I've just watched this great talk by Mark Seemann

Friday, June 23, 2017

Testing Om components with cljs-react-test

I'm working for Green Power Monitor, a company based in Barcelona that specializes in monitoring renewable energy power plants and has clients all over the world.

We're developing a new application to monitor and manage renewable energy portfolios. I'm part of the front-end team. We're working on a challenging SPA that includes a large quantity of data visualization and which should present that data in a UI that is polished and easy to look at. We are using ClojureScript with Om (a ClojureScript interface to React), which are helping us be very productive.

I’d like to show an example in which we are testing an Om component that is used to select a command from several options, such as loading stored filtering and grouping criteria for alerts (views), saving the current view, deleting an already saved view or going back to the default view.

This control will send a different message through a core.async channel depending on the selected command. This is the behavior we are going to test in this example: that the right message is sent through the channel for each selected command. We try to write all our components following this guideline of communicating with the rest of the application by sending data through core.async channels. Using channels makes testing much easier because the control doesn’t know anything about its context.

We’re using cljs-react-test to test these Om components as a black box. cljs-react-test is a ClojureScript wrapper around React's Test Utilities which provides functions that allow us to mount and unmount controls in test fixtures, and interact with controls simulating events.

This is the code of the test:

We start by creating a var, c, where we’ll put a DOM object that will act as a container for our application.

We use a fixture function that creates this container before each test and tears down React's rendering tree after each test. Notice that the fixture uses the async macro so it can be used for asynchronous tests. If your tests are not asynchronous, use the simpler fixture example that appears in the cljs-react-test documentation.

All the tests follow this structure:

  1. Setting up the initial state in an atom, app-state. This atom contains the data that will be passed to the control.
  2. Mounting the Om root on the container. Notice that the combobox is already expanded to save a click.
  3. Declaring what we expect to receive from the commands-channel using expect-async-message.
  4. Finally, selecting the option we want from the combobox, and clicking on it.

expect-async-message is one of several functions we’re using to assert what to receive through core.async channels:
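
That ClojureScript helper isn't embedded in this page. Purely as an illustration of the idea, here is a rough Python/asyncio analogue of what an expect-async-message-style assertion does (the names and the message shape are made up; the real helper works with core.async channels):

    import asyncio

    async def expect_async_message(channel, expected, timeout=1.0):
        # Fail if nothing arrives in time; otherwise assert it is the expected message.
        message = await asyncio.wait_for(channel.get(), timeout)
        assert message == expected, f"expected {expected!r} but got {message!r}"

    async def main():
        commands = asyncio.Queue()
        await commands.put({"command": "save-view", "name": "my alarms"})
        await expect_async_message(commands, {"command": "save-view", "name": "my alarms"})

    asyncio.run(main())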

The good thing about this kind of black-box test is that it interacts with the control as a user would, so the tests know nearly nothing about how the control is implemented.

Interesting Webcast: "CS Education Zoo interviews David Nolen"

I've just watched this great CS Education Zoo #5 webcast with David Nolen

Saturday, June 17, 2017

Course: Introduction to CSS3 on Coursera

My current client is Green Power Monitor (GPM), a company based in Barcelona that specializes in monitoring renewable energy power plants and has clients all over the world.

I'm part of a team that is developing a new application to monitor and manage renewable energy portfolios. We use C# and F# in the back-end and ClojureScript in the front-end.

I'm in the front-end team. We're developing a challenging SPA with lots of data visualization which has to look really good.

We're taking advantage of Om (a ClojureScript interface to React), core.async and ClojureScript to be more productive.

In other teams I've been on before, there were different people doing the HTML & CSS and programming the JavaScript. That's not the case at GPM: we are responsible not only for the programming but also for all the styling of the application. We are using SASS.

For me, not having done CSS before, it was a big challenge. I dreaded every time I had to style a new Om control I had programmed. My colleagues Jordi and Andre have helped me a lot (thanks guys!). However, I wanted to become more productive, be more independent and use less of their time, so I decided to do a CSS3 course.

I did the Introduction to CSS3 course from the University of Michigan. I learned how to use CSS3 to style pages, focusing on both proper syntax and the importance of accessibility design. I really liked Colleen van Lent's classes and how she encourages you to experiment and make messes in order to learn. Thanks Colleen!

After the course I'm starting to be able to style my controls with less trial and error and with fewer questions for Jordi and Andre.

Learning CSS3 and doing all the styling myself is helping me to become a bit more rounded as a front-end developer.

Sunday, June 4, 2017

Kata: Luhn Test in Clojure

We recently did the Luhn Test kata at a Barcelona Software Craftsmanship event.

This is a very interesting problem to practice TDD because it isn't so obvious how to test drive a solution through the only function in its public API: valid?.

What we observed during the dojo is that, since the implementation of the Luhn Test is described in so much detail in terms of the s1 and s2 functions (check its description here), it was very tempting for participants to test these two private functions instead of the valid? function.
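
For readers who don't follow the link, this is roughly what s1, s2 and valid? compute, sketched here in Python rather than Clojure (it reflects my reading of the kata description, so treat the details as assumptions):

    # Rough Python sketch of the Luhn test (the kata itself was solved in Clojure).
    def s1(digits):
        # Taking the digits from right to left, s1 sums the ones in odd positions.
        return sum(digits[::-1][0::2])

    def s2(digits):
        # s2 doubles the digits in even positions and sums the digits of each product.
        doubled = (2 * d for d in digits[::-1][1::2])
        return sum(d - 9 if d > 9 else d for d in doubled)

    def valid(card_number):  # the equivalent of the kata's valid? function
        digits = [int(c) for c in str(card_number)]
        return (s1(digits) + s2(digits)) % 10 == 0

    assert valid(49927398716)
    assert not valid(49927398717)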

Another variant of that first approach consisted in making those functions public in a different module or class, to avoid feeling "guilty" about testing private functions. Even though in this case only public functions were tested, this approach produced a solution with many more elements than needed, i.e. with a poorer design according to the 4 rules of simple design. It also produced tests that are very coupled to implementation details.

In a language with a good REPL, a better and faster approach might have been writing a failing test for the valid? function, and then interactively developing the s1 and s2 functions in the REPL. Combining s1 and s2 would then have made the failing test for valid? pass. At the end, we could add some other tests for valid? to gain confidence in the solution.

This mostly REPL-driven approach is fast and produces tests that don't know anything about the implementation details of valid?. However, we need to realize that it follows the same technique of "testing" (although only interactively) private functions. The huge improvement is that we don't keep these tests and we don't create more elements than needed. The weakness of this approach, though, is that it leaves us with less protection against possible regressions. That's why we need to complement it with some tests, written after the code, to gain confidence in the solution.

If we use TDD writing tests only for the valid? function, we can avoid creating superfluous elements and create a good protection against regressions at the same time. We only need to choose our test examples wisely.

These are the tests I used to test drive a solution (using Midje):

Notice that I actually needed 7 tests to drive the solution. The last four tests were added to gain confidence in it.

This is the resulting code:

See all the commits here if you want to follow the process. You can find all the code on GitHub.

We can improve this regression test suite by changing some of the tests to make them fail for different reasons:

I think this kata is very interesting for practicing TDD, in particular, to learn how to choose good examples for your tests.

Friday, April 28, 2017

Books I read (January - April 2017)

January
- Vencidos pero vivos (Vaincus mais vivants), Maximilien Leroy and Loïc Locatelli Kournwsky
- El Boxeador: la verdadera historia de Hertzko Haft (Der Boxer: Die Überlebensgeschichte des Hertzko Haft), Reinhard Kleist
- Vuelo a casa y otras historias (Flying Home and Other Stories), Ralph Ellison
- Rendezvous with Rama, Arthur C. Clarke
- Treasure Island, Robert Louis Stevenson
- Agile Principles, Patterns, and Practices in C#, Robert C. Martin
- Atonement, Ian McEwan
- En una piel de león (In the Skin of a Lion), Michael Ondaatje

February
- El gigante enterrado, Kazuo Ishiguro
- The Time Machine, H.G. Wells

March
- JavaScript Patterns: Build Better Applications with Coding and Design Patterns, Stoyan Stefanov
- Slack: Getting Past Burnout, Busywork, and the Myth of Total Efficiency, Tom DeMarco
- El minotauro global (The Global Minotaur: America, the True Origins of the Financial Crisis and the Future of the World Economy 2nd edition), Yanis Varoufakis
- La sonrisa etrusca, José Luis Sampedro

April
- Howards End, E. M. Forster
- A Wizard of Earthsea, Ursula K. Le Guin
- Las tumbas de Atuan (The Tombs of Atuan), Ursula K. Le Guin
- En tierras bajas (Niederungen), Herta Müller
- How to Love, Thich Nhat Hanh
- La costa más lejana (The Farthest Shore), Ursula K. Le Guin
- Tehanu, Ursula K. Le Guin
- Obliquity: Why our goals are best achieved indirectly, John Kay
- Tales from Earthsea, Ursula K. Le Guin
- The Other Wind, Ursula K. Le Guin
- Metaphors We Live By, George Lakoff and Mark Johnson

Thursday, February 9, 2017

Recorded talk about sequence comprehensions in Clojure (in Spanish)

Today we did our fourth remote talk about Clojure as part of our small Clojure/ClojureScript study group.

This time we talked about sequence comprehensions and we recorded it as well.

This is the resulting video:

You'll find the examples we used here.

Again I'd like to especially thank Ángel for his help and patience.

I hope you'll find it useful.

Thursday, January 26, 2017

About Connascence

This post appeared originally on Codesai’s Blog.

Lately at Codesai we’ve been studying and applying the concept of connascence in our code, and we have even given an introductory talk about it. We’d like this post to be the first of a series of posts about connascence.

 

1. Origin.

The concept of connascence is not new at all. Meilir Page-Jones introduced it in 1992 in his paper Comparing Techniques by Means of Encapsulation and Connascence. Later, he elaborated more on the idea of connascence in his What every programmer should know about object-oriented design book from 1995, and its more modern version (same book but using UML) Fundamentals of Object-Oriented Design in UML from 1999.
Ten years later, Jim Weirich brought connascence back from oblivion in a series of talks: Grand Unified Theory of Software Design, The Building Blocks of Modularity and Connascence Examined. As we’ll see later in this post, he not only brought connascence back to life, but also improved on its exposition.
More recently, Kevin Rutherford wrote a very interesting series of posts, in which he talked about using connascence as a guide to choose the most effective refactorings and about how connascence can be a more objective and useful tool than code smells to identify design problems[1].

 

2. What is connascence?

The concept of connascence appeared in the early nineties, when OO was starting its path to becoming the dominant programming paradigm, as a general way to evaluate design decisions in an OO design. In the previously dominant paradigm, structured programming, fan-out, coupling and cohesion were the fundamental design criteria used to evaluate design decisions. To make clear what Page-Jones understood by these terms, let’s see the definitions he used:
  • Fan-out is a measure of the number of references to other procedures by lines of code within a given procedure.
  • Coupling is a measure of the number and strength of connections between procedures.
  • Cohesion is a measure of the “single-mindedness” of the lines of code within a given procedure in meeting the purpose of that procedure.
According to Page-Jones, these design criteria govern the interactions between the levels of encapsulation that are present in structured programming: level-1 encapsulation (the subroutine) and level-0 (lines of code), as can be seen in the following table from Fundamentals of Object-Oriented Design in UML.

Encapsulation levels and design criteria in structured programming.

However, OO introduces at least level-2 encapsulation (the class), which encapsulates level-1 constructs (methods) together with attributes. This introduces many new interdependencies among encapsulation levels, which will require new design criteria to be defined (see the following table from Fundamentals of Object-Oriented Design in UML).

Encapsulation levels and design criteria in OO.

Two of these new design criteria are class cohesion and class coupling, which are analogous to structured programming's procedure cohesion and procedure coupling, but, as you can see, there are others in the table for which there isn't even a name.
Connascence is meant to be a deeper criterion behind all of them and, as such, it is a general way to evaluate design decisions in an OO design. This is the formal definition of connascence by Page-Jones:
Connascence between two software elements A and B means either
  1. that you can postulate some change to A that would require B to be changed (or at least carefully checked) in order to preserve overall correctness, or
  2. that you can postulate some change that would require both A and B to be changed together in order to preserve overall correctness.
In other words, there is connascence between two software elements when they must change together in order for the software to keep working correctly.
We can see how this new design criterion can be used for any of the interdependencies among encapsulation levels present in OO. Moreover, it can also be used for higher levels of encapsulation (packages, modules, components, bounded contexts, etc.). In fact, according to Page-Jones, connascence is applicable to any design paradigm with partitioning, encapsulation and visibility rules[2].

 

3. Forms of connascence.

Page-Jones distinguishes several forms (or types) of connascence.
Connascence can be static, when it can be assessed from the lexical structure of the code, or dynamic, when it depends on the execution patterns of the code at run-time.
There are several types of static connascence:
  • Connascence of Name (CoN): when multiple components must agree on the name of an entity.
  • Connascence of Type (CoT): when multiple components must agree on the type of an entity.
  • Connascence of Meaning (CoM): when multiple components must agree on the meaning of specific values.
  • Connascence of Position (CoP): when multiple components must agree on the order of values.
  • Connascence of Algorithm (CoA): when multiple components must agree on a particular algorithm.
There are also several types of dynamic connascence:
  • Connascence of Execution (order) (CoE): when the order of execution of multiple components is important.
  • Connascence of Timing (CoTm): when the timing of the execution of multiple components is important.
  • Connascence of Value (CoV): when there are constraints on the possible values some shared elements can take. It’s usually related to invariants.
  • Connascence of Identity (CoI): when multiple components must reference the same entity.
Another important form of connascence is contranascence, which exists when elements are required to differ from each other (e.g., have different names in the same namespace, or be in different namespaces). Contranascence may also be either static or dynamic. The two small sketches below illustrate a couple of the forms above.
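
Both are hypothetical Python examples of my own (the find_user_index and Channel names are made up; later posts in the series contain proper examples): the first shows Connascence of Meaning, the second Connascence of Execution (order).

    # Connascence of Meaning: the caller and the function must agree that
    # -1 means "not found"; the meaning of that value is an implicit convention.
    def find_user_index(users, name):
        for i, user in enumerate(users):
            if user == name:
                return i
        return -1

    if find_user_index(["ada", "grace"], "alan") == -1:
        print("user not found")

    # Connascence of Execution (order): connect() must run before send(),
    # otherwise the program breaks at run-time.
    class Channel:
        def __init__(self):
            self.connected = False

        def connect(self):
            self.connected = True

        def send(self, message):
            if not self.connected:
                raise RuntimeError("connect() must be called before send()")
            print(f"sent: {message}")

    channel = Channel()
    channel.connect()  # callers must remember to do this first
    channel.send("hello")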

 

4. Properties of connascence.

Page-Jones talks about two important properties of connascence that help measure its impact on maintainability:
  • Degree of explicitness: the more explicit a connascence form is, the weaker it is.
  • Locality: connascence across encapsulation boundaries is much worse than connascence between elements inside the same encapsulation boundary.
A nice way to reformulate this is using what is called the three axes of connascence[3]:

4.1. Degree.

The degree of an instance of connascence is related to the size of its impact. For instance, a software element that is connascent with hundreds of elements is likely to become a larger problem than one that is connascent with only a few.

4.2. Locality.

The locality of an instance of connascence talks about how close the two software elements are to each other. Elements that are close together (in the same encapsulation boundary) should typically present more, and higher forms of connascence than elements that are far apart (in different encapsulation boundaries). In other words, as the distance between software elements increases, the forms of connascence should be weaker.

4.3. Strength.

Page-Jones states that connascence has a spectrum of explicitness. The more implicit a form of connascence is, the more time-consuming and costly it is to detect. A stronger form of connascence is also usually harder to refactor. Following this reasoning, stronger forms of connascence are harder to detect and/or refactor. This is why static forms of connascence are weaker (easier to detect) than dynamic ones, and why, for example, CoN is much weaker (easier to refactor) than CoP.
The following figure by Kevin Rutherford shows the different forms of connascence we saw before, but sorted by descending strength.

 Connascence forms sorted by descending strength (from Kevin Rutherford's XP Surgery).

 

5. Connascence, design principles and refactoring.

Connascence is simpler than other design principles, such as the SOLID principles, the Law of Demeter, etc. In fact, it can be used to see those principles in a different light, much as they can be seen through more fundamental principles like the ones in the first chapter of Kent Beck’s Implementation Patterns book.
We use code smells, which are a collection of code quality antipatterns, to guide our refactorings and improve our design, but, according to Kevin Rutherford, they are not the ideal tool for this task[4]. Sometimes connascence might be a better metric to reason about coupling than the somewhat fuzzy concept of code smells.
Connascence gives us a more precise vocabulary to talk and reason about coupling and cohesion[5], and thus helps us to better judge our designs in terms of coupling and cohesion, and to decide how to improve them. In the words of Gregory Brown, “this allows us to be much more specific about the problems we’re dealing with, which makes it easier to reason about the types of refactorings that can be used to weaken the connascence between components”.
It provides a classification of the forms of coupling in a system and, even better, a scale of the relative strength of the coupling each form of connascence generates. It’s precisely that scale of relative strengths that makes connascence a much better guide for refactoring. As Kevin Rutherford says:
"because it classifies the relative strength of that coupling, connascence can be used as a tool to help prioritize what should be refactored first"
Connascence explains why doing a given refactoring is a good idea.

 

6. How should we apply connascence?

Page-Jones offers three guidelines for using connascence to improve systems maintainability:
  1. Minimize overall connascence by breaking the system into encapsulated elements.
  2. Minimize any remaining connascence that crosses encapsulation boundaries.
  3. Maximize the connascence within encapsulation boundaries.
According to Kevin Rutherford, the first two points form what he calls the Page-Jones refactoring algorithm[6].
These guidelines generalize the structured design ideals of low coupling and high cohesion, and are applicable to OO or, as was said before, to any other paradigm with partitioning, encapsulation and visibility rules.
They might still be a little subjective, so some of us prefer a more concrete way to apply connascence, using Jim Weirich’s two principles or rules:
  • Rule of Degree[7]: Convert strong forms of connascence into weaker forms of connascence.
  • Rule of Locality: As the distance between software elements increases, use weaker forms of connascence.

 

7. What’s next?

In future posts, we’ll see examples of concrete forms of connascence, relating them to design principles, code smells, and refactorings that might improve the design.

Footnotes:
[1] See Kevin Rutherford's great post The problem with code smells.
[2] This explains the titles Jim Weirich chose for his talks: Grand Unified Theory of Software Design and The Building Blocks of Modularity.
[4] Again see Kevin Rutherford's great post The problem with code smells.
[5] The concepts of coupling and cohesion can be hard to grasp; see the debate about them in the Understanding Coupling and Cohesion hangout.
[6] See Kevin Rutherford's post The Page-Jones refactoring algorithm.
[7] Even though he used the word degree, he was actually talking about strength.

 
