Manage Analyses Form re-applies partitioned Analyses back to the Root Sample #1505

Merged (24 commits, Jan 17, 2020)

Commits (changes from all commits):
8ac1da0
Added cleanup migration step
ramonski Jan 15, 2020
5c6dd12
Bypass permission check in migration
ramonski Jan 15, 2020
20b7a40
Consider ancestors and partitions when adding analyses to a Sample
xispa Jan 15, 2020
013dd6c
Take analyses from partitions when removing analyses from a Sample
xispa Jan 15, 2020
500ae96
Ensure submitted analyses from partitions cannot be removed
xispa Jan 15, 2020
e2ad454
Cleanup imports
xispa Jan 15, 2020
1fbafd5
Only return the analyses that have been created
xispa Jan 15, 2020
a4e9a46
Fix test
xispa Jan 15, 2020
7723fcc
Remove obsolete comment
xispa Jan 16, 2020
fa5844a
New doctest for functions from ARAnalysesField
xispa Jan 16, 2020
3d9d90e
Test setter of ARAnalysesField with partition
xispa Jan 16, 2020
8a1fefc
Getter cleanup
xispa Jan 16, 2020
a986a29
Updated migration step
ramonski Jan 16, 2020
755cc00
Merge branch 'fix-cascaded-root-analyses' of git://github.com/senaite…
ramonski Jan 16, 2020
155bdb1
Remove masking in ZCatalog's monkey
xispa Jan 16, 2020
07103f1
Merge branch 'fix-cascaded-root-analyses' of github.com:senaite/senai…
xispa Jan 16, 2020
9c4e1b8
Skip rejected or retracted analyses from partitions
ramonski Jan 16, 2020
5fcd853
Also ignore root analyses that are in invalid state
ramonski Jan 16, 2020
0ac1a6d
Improved migration step
ramonski Jan 16, 2020
0f1c049
Add a link to partition in analysis listing if necessary
xispa Jan 16, 2020
dde36ca
Handle None items gracefully
xispa Jan 16, 2020
9d2b839
Changelog
xispa Jan 16, 2020
f1d2869
Fetch transitions in manage analyses view
ramonski Jan 17, 2020
2478cdf
Changelog
xispa Jan 17, 2020
2 changes: 2 additions & 0 deletions CHANGES.rst
@@ -7,6 +7,7 @@ Changelog

**Added**

- #1505 Display partition link in analyses listing
- #1491 Enable Audit-logging for Dexterity Contents
- #1489 Support Multiple Catalogs for Dexterity Contents
- #1481 Filter Templates field when Sample Type is selected in Sample Add form
@@ -30,6 +31,7 @@ Changelog

**Fixed**

- #1505 Manage Analyses Form re-applies partitioned Analyses back to the Root
- #1503 Avoid duplicate CSS IDs in multi-column Add form
- #1501 Fix Attribute Error in Reference Sample Popup
- #1493 jsonapi.read omits `include_methods` when a single parameter is used
16 changes: 16 additions & 0 deletions bika/lims/browser/analyses/view.py
@@ -563,6 +563,8 @@ def folderitem(self, obj, item, index):
self._folder_item_detection_limits(obj, item)
# Fill Specifications
self._folder_item_specifications(obj, item)
# Fill Partition
self._folder_item_partition(obj, item)
# Fill Due Date and icon if late/overdue
self._folder_item_duedate(obj, item)
# Fill verification criteria
@@ -1172,6 +1174,20 @@ def _folder_item_accredited_icon(self, analysis_brain, item):
img = get_image("accredited.png", title=t(_("Accredited")))
self._append_html_element(item, "Service", img)

def _folder_item_partition(self, analysis_brain, item):
"""Adds an anchor to the partition if the current analysis is from a
partition that does not match with the current context
"""
if not IAnalysisRequest.providedBy(self.context):
return

sample_id = analysis_brain.getRequestID
if sample_id != api.get_id(self.context):
part_url = analysis_brain.getRequestURL
url = get_link(part_url, value=sample_id, **{"class": "small"})
title = item["replace"].get("Service") or item["Service"]
item["replace"]["Service"] = "{}<br/>{}".format(title, url)

def _folder_item_report_visibility(self, analysis_brain, item):
"""Set if the hidden field can be edited (enabled/disabled)

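For illustration, a rough sketch of what the new `_folder_item_partition` handler produces; the Sample and partition IDs below are invented and only the resulting markup matters. When the listing is rendered on a root Sample and an analysis brain actually belongs to one of its partitions, the Service cell is replaced with the service title plus a small link to that partition:

# Sketch only: the listing context is the root Sample "W-0001", while the
# brain comes from its partition "W-0001-P01" (both IDs are made up).
item = {"Service": "pH", "replace": {}}

# After _folder_item_partition(brain, item) runs, the Service cell carries
# the service title plus the anchor built by
# get_link(url, value=sample_id, **{"class": "small"}):
item["replace"]["Service"] = (
    'pH<br/><a href="http://localhost/clients/client-1/W-0001-P01"'
    ' class="small">W-0001-P01</a>'
)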
1 change: 0 additions & 1 deletion bika/lims/browser/analysisrequest/manage_analyses.py
@@ -60,7 +60,6 @@ def __init__(self, context, request):
self.show_select_all_checkbox = False
self.pagesize = 999999
self.show_search = True
self.fetch_transitions_on_select = False

self.categories = []
self.selected = []
202 changes: 141 additions & 61 deletions bika/lims/browser/fields/aranalysesfield.py
@@ -22,21 +22,21 @@

from AccessControl import ClassSecurityInfo
from AccessControl import Unauthorized
from Products.Archetypes.Registry import registerField
from Products.Archetypes.public import Field
from Products.Archetypes.public import ObjectField
from zope.interface import implements

from bika.lims import api
from bika.lims import logger
from bika.lims.api.security import check_permission
from bika.lims.catalog import CATALOG_ANALYSIS_LISTING
from bika.lims.interfaces import IAnalysis, ISubmitted
from bika.lims.interfaces import IAnalysisService
from bika.lims.interfaces import IARAnalysesField
from bika.lims.interfaces import IAnalysis
from bika.lims.interfaces import IAnalysisService
from bika.lims.interfaces import ISubmitted
from bika.lims.permissions import AddAnalysis
from bika.lims.utils.analysis import create_analysis
from Products.Archetypes.public import Field
from Products.Archetypes.public import ObjectField
from Products.Archetypes.Registry import registerField
from Products.Archetypes.utils import shasattr
from Products.CMFCore.utils import getToolByName
from zope.interface import implements

"""Field to manage Analyses on ARs

@@ -72,16 +72,21 @@ def get(self, instance, **kwargs):
:param kwargs: Keyword arguments to inject in the search query
:returns: A list of Analysis Objects/Catalog Brains
"""
catalog = getToolByName(instance, CATALOG_ANALYSIS_LISTING)
query = dict(
[(k, v) for k, v in kwargs.items() if k in catalog.indexes()])
query["portal_type"] = "Analysis"
query["getRequestUID"] = api.get_uid(instance)
analyses = catalog(query)
if not kwargs.get("full_objects", False):
return analyses
# Do we need to return objects or brains
full_objects = kwargs.get("full_objects", False)

# Filter out parameters from kwargs that do not match any catalog index
catalog = api.get_tool(CATALOG_ANALYSIS_LISTING)
indexes = catalog.indexes()
query = dict([(k, v) for k, v in kwargs.items() if k in indexes])

return map(api.get_object, analyses)
# Do the search against the catalog
query["portal_type"] = "Analysis"
query["getAncestorsUIDs"] = api.get_uid(instance)
brains = catalog(query)
if full_objects:
return map(api.get_object, brains)
return brains

security.declarePrivate('set')

@@ -99,22 +104,8 @@ def set(self, instance, items, prices=None, specs=None, hidden=None, **kw):
:type hidden: list
:returns: list of new assigned Analyses
"""
# This setter returns a list of new set Analyses
new_analyses = []

# Current assigned analyses
analyses = instance.objectValues("Analysis")

# Submitted analyses must be retained
submitted = filter(lambda an: ISubmitted.providedBy(an), analyses)

# Prevent removing all analyses
#
# N.B.: Submitted analyses are rendered disabled in the HTML form.
# Therefore, their UIDs are not included in the submitted UIDs.
if not items and not submitted:
logger.warn("Not allowed to remove all Analyses from AR.")
return new_analyses
if items is None:
items = []

# Bail out if the items is not a list type
if not isinstance(items, (list, tuple)):
@@ -156,33 +147,22 @@ def set(self, instance, items, prices=None, specs=None, hidden=None, **kw):
if prices is None:
prices = dict()

# CREATE/MODIFY ANALYSES
# Add analyses
new_analyses = map(lambda service:
self.add_analysis(instance, service, prices, hidden),
services)
new_analyses = filter(None, new_analyses)

for service in services:
service_uid = api.get_uid(service)
keyword = service.getKeyword()

# Create the Analysis if it doesn't exist
if shasattr(instance, keyword):
analysis = instance._getOb(keyword)
else:
analysis = create_analysis(instance, service)
new_analyses.append(analysis)

# set the hidden status
analysis.setHidden(hidden.get(service_uid, False))

# Set the price of the Analysis
analysis.setPrice(prices.get(service_uid, service.getPrice()))

# DELETE ANALYSES
# Remove analyses
# Since Manage Analyses view displays the analyses from partitions, we
# also need to take them into consideration here. Analyses from
# ancestors can be omitted.
analyses = instance.objectValues("Analysis")
analyses.extend(self.get_analyses_from_descendants(instance))

# Service UIDs
service_uids = map(api.get_uid, services)

# Analyses IDs to delete
delete_ids = []

# Assigned Attachments
assigned_attachments = []

@@ -194,7 +174,7 @@ def set(self, instance, items, prices=None, specs=None, hidden=None, **kw):
continue

# Skip non-open Analyses
if analysis in submitted:
if ISubmitted.providedBy(analysis):
continue

# Remember assigned attachments
@@ -207,11 +187,9 @@ def set(self, instance, items, prices=None, specs=None, hidden=None, **kw):
if worksheet:
worksheet.removeAnalysis(analysis)

delete_ids.append(analysis.getId())

if delete_ids:
# Note: subscriber might promote the AR
instance.manage_delObjects(ids=delete_ids)
# Remove the analysis
# Note the analysis might belong to a partition
analysis.aq_parent.manage_delObjects(ids=[api.get_id(analysis)])

# Remove orphaned attachments
for attachment in assigned_attachments:
@@ -224,6 +202,108 @@ def set(self, instance, items, prices=None, specs=None, hidden=None, **kw):

return new_analyses

def add_analysis(self, instance, service, prices, hidden):
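"""Adds an analysis to the instance for the given service, or reuses the
analysis that already exists in the instance, an ancestor or a partition.
Returns the analysis only if it was newly created, None otherwise
"""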
service_uid = api.get_uid(service)
new_analysis = False

# Gets the analysis or creates the analysis for this service
# Note this analysis might not belong to this current instance, but
# from a descendant (partition)
analysis = self.resolve_analysis(instance, service)
if not analysis:
# Create the analysis
new_analysis = True
keyword = service.getKeyword()
logger.info("Creating new analysis '{}'".format(keyword))
analysis = create_analysis(instance, service)

# Set the hidden status
analysis.setHidden(hidden.get(service_uid, False))

# Set the price of the Analysis
analysis.setPrice(prices.get(service_uid, service.getPrice()))

# Only return the analysis if it is a new one
if new_analysis:
return analysis

return None

def resolve_analysis(self, instance, service):
"""Resolves an analysis for the service and instance
"""
# Does the analysis exist in this instance already?
analysis = self.get_from_instance(instance, service)
if analysis:
keyword = service.getKeyword()
logger.info("Analysis for '{}' already exists".format(keyword))
return analysis

# Does the analysis exist in an ancestor?
from_ancestor = self.get_from_ancestor(instance, service)
if from_ancestor:
# Move the analysis into this instance. The ancestor's
# analysis will be masked otherwise
analysis_id = api.get_id(from_ancestor)
logger.info("Analysis {} is from an ancestor".format(analysis_id))
cp = from_ancestor.aq_parent.manage_cutObjects(analysis_id)
instance.manage_pasteObjects(cp)
return instance._getOb(analysis_id)

# Does the analysis exist in a descendant?
from_descendant = self.get_from_descendant(instance, service)
if from_descendant:
# The analysis already exists in a partition, keep it. The
# analysis from the current instance will be masked otherwise
analysis_id = api.get_id(from_descendant)
logger.info("Analysis {} is from a descendant".format(analysis_id))
return from_descendant

return None

def get_analyses_from_descendants(self, instance):
"""Returns all the analyses from descendants
"""
analyses = []
for descendant in instance.getDescendants(all_descendants=True):
analyses.extend(descendant.objectValues("Analysis"))
return analyses

def get_from_instance(self, instance, service):
"""Returns an analysis for the given service from the instance
"""
service_uid = api.get_uid(service)
for analysis in instance.objectValues("Analysis"):
if analysis.getServiceUID() == service_uid:
return analysis
return None

def get_from_ancestor(self, instance, service):
"""Returns an analysis for the given service from ancestors
"""
ancestor = instance.getParentAnalysisRequest()
if not ancestor:
return None

analysis = self.get_from_instance(ancestor, service)
return analysis or self.get_from_ancestor(ancestor, service)

def get_from_descendant(self, instance, service):
"""Returns an analysis for the given service from descendants
"""
for descendant in instance.getDescendants():
# Does the analysis exist in the current descendant?
analysis = self.get_from_instance(descendant, service)
if analysis:
return analysis

# Search in descendants from current descendant
analysis = self.get_from_descendant(descendant, service)
if analysis:
return analysis

return None

def _get_services(self, full_objects=False):
"""Fetch and return analysis service objects
"""
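Putting the setter pieces together: for every requested service, `resolve_analysis` looks in the Sample itself first, then in its ancestors (moving the analysis down so the ancestor's copy is not masked), and finally in its partitions (leaving the analysis where it lives). A minimal usage sketch, assuming a root Sample object `sample` that already has a partition, two pre-fetched service objects, and the field registered on the Sample under the name "Analyses"; all names here are illustrative, not part of the diff:

# Sketch only: "sample", "cu_service" and "fe_service" are assumed to exist.
field = sample.getField("Analyses")

# Assign Cu and Fe through the Manage Analyses setter. If Fe already lives
# in a partition, resolve_analysis() finds it via get_from_descendant() and
# keeps it there, so only the genuinely new Cu analysis is returned:
new_analyses = field.set(sample, [cu_service, fe_service])

# The getter queries the analyses catalog by getAncestorsUIDs, so analyses
# contained in the partition are returned for the root Sample as well:
all_analyses = field.get(sample, full_objects=True)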
23 changes: 3 additions & 20 deletions bika/lims/monkey/zcatalog.py
@@ -33,32 +33,15 @@ def searchResults(self, REQUEST=None, used=None, **kw):
and self.id == CATALOG_ANALYSIS_LISTING:

# Fetch all analyses that have the request UID passed in as an ancestor,
# cause we want Primary ARs to always display the analyses from their
# derived ARs (if result is not empty)

# because we want Samples to always return the contained analyses plus
# those contained in partitions
request = REQUEST.copy()
orig_uid = request.get('getRequestUID')

# If a list of request uid, retrieve them sequentially to make the
# masking process easier
if isinstance(orig_uid, list):
results = list()
for uid in orig_uid:
request['getRequestUID'] = [uid]
results += self.searchResults(REQUEST=request, used=used, **kw)
return results

# Get all analyses, those from descendant ARs included
del request['getRequestUID']
request['getAncestorsUIDs'] = orig_uid
results = self.searchResults(REQUEST=request, used=used, **kw)

# Masking
primary = filter(lambda an: an.getParentUID == orig_uid, results)
derived = filter(lambda an: an.getParentUID != orig_uid, results)
derived_keys = map(lambda an: an.getKeyword, derived)
results = filter(lambda an: an.getKeyword not in derived_keys, primary)
return results + derived
return self.searchResults(REQUEST=request, used=used, **kw)

# Normal search
return self._catalog.searchResults(REQUEST, used, **kw)
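With the masking logic gone, the monkey-patched `searchResults` simply rewrites a `getRequestUID` query into a `getAncestorsUIDs` query and searches again, so a Sample's listing also returns the analyses held by its partitions. A sketch of that equivalence, assuming `sample_uid` is the UID of a root Sample (illustrative only):

from bika.lims import api
from bika.lims.catalog import CATALOG_ANALYSIS_LISTING

# Sketch only: sample_uid is assumed to be the UID of a root Sample.
catalog = api.get_tool(CATALOG_ANALYSIS_LISTING)

# A query by getRequestUID against the analyses catalog ...
results = catalog({"portal_type": "Analysis", "getRequestUID": sample_uid})

# ... now behaves like this explicit ancestors query, i.e. analyses kept in
# partitions of that Sample are part of the result as well:
equivalent = catalog({"portal_type": "Analysis", "getAncestorsUIDs": sample_uid})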
19 changes: 1 addition & 18 deletions bika/lims/tests/doctests/ARAnalysesField.rst
@@ -269,33 +269,16 @@ We expect to have just the `PH` Analysis again:
>>> ar.objectValues("Analysis")
[<Analysis at /plone/clients/client-1/water-0001/PH>]

Removing all Analyses is prevented, because it can not be empty:

>>> new_analyses = field.set(ar, [])
>>> ar.objectValues("Analysis")
[<Analysis at /plone/clients/client-1/water-0001/PH>]

The field can also handle UIDs of Analyses Services:

>>> service_uids = map(api.get_uid, all_services)
>>> new_analyses = field.set(ar, service_uids)

We expect again to have the `CA` and `MG` Analyses as well:

>>> sorted(new_analyses, key=methodcaller('getId'))
[<Analysis at /plone/clients/client-1/water-0001/CA>, <Analysis at /plone/clients/client-1/water-0001/MG>]

And all the three Analyses in total:
We expect again to have all three Analyses:

>>> sorted(ar.objectValues("Analysis"), key=methodcaller("getId"))
[<Analysis at /plone/clients/client-1/water-0001/CA>, <Analysis at /plone/clients/client-1/water-0001/MG>, <Analysis at /plone/clients/client-1/water-0001/PH>]

Set again only the `PH` Analysis:

>>> new_analyses = field.set(ar, [analysisservice1])
>>> ar.objectValues("Analysis")
[<Analysis at /plone/clients/client-1/water-0001/PH>]

The field should also handle catalog brains:

>>> brains = api.search({"portal_type": "AnalysisService", "getKeyword": "CA"})